| modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars) |
|---|---|---|---|---|---|---|
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28
| null |
---
inference: false
tags:
- text-to-video
- text-to-image
- jax-diffusers-event
pipeline_tag: text-to-video
datasets:
- TempoFunk/tempofunk-sdance
- TempoFunk/small
license: agpl-3.0
language: en
library_name: diffusers
---
# Make-A-Video SD JAX Model Card
**A latent diffusion model for text-to-video synthesis.**
**[Try it with an interactive demo on HuggingFace spaces.](https://huggingface.co/spaces/TempoFunk/makeavid-sd-jax)**
Training code and the PyTorch and FLAX implementations are available here: <https://github.com/lopho/makeavid-sd-tpu>
This model extends an inpainting latent-diffusion image generation model ([Stable Diffusion v1.5 Inpaint](https://huggingface.co/runwayml/stable-diffusion-inpainting))
with temporal convolution and temporal self-attention ported from [Make-A-Video PyTorch](https://github.com/lucidrains/make-a-video-pytorch).
It was then fine-tuned for ~150k steps on a [dataset](https://huggingface.co/datasets/TempoFunk/tempofunk-sdance) of 10,000 videos themed around dance,
then for an additional ~50k steps with [extra data](https://huggingface.co/datasets/TempoFunk/small) of generic videos mixed into the original set.
This model used weights pretrained by [lxj616](https://huggingface.co/lxj616/make-a-stable-diffusion-video-timelapse) on 286 timelapse video clips for initialization.

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Limitations](#limitations)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Hyperparameters](#hyperparameters)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
## Model Details
* **Developed by:** [Lopho](https://huggingface.co/lopho), [Chavinlo](https://huggingface.co/chavinlo)
* **Model type:** Diffusion based text-to-video generation model
* **Language(s):** English
* **License:** (pending) GNU Affero General Public License 3.0
* **Further resources:** [Model implementation & training code](https://github.com/lopho/makeavid-sd-tpu), [Weights & Biases training statistics](https://wandb.ai/tempofunk/makeavid-sd-tpu)
## Uses
* Understanding limitations and biases of generative video models
* Development of educational or creative tools
* Artistic usage
* Whatever you want
## Limitations
* Limited knowledge of temporal concepts not seen during training (see linked datasets)
* Emergent flashing lights, most likely due to training on dance videos, which include many scenes with bright, neon and flashing lights
* The model has only been trained with English captions and will not perform as well in other languages
## Training
### Training Data
* [S(mall)dance](https://huggingface.co/datasets/TempoFunk/tempofunk-sdance): 10,000 video-caption pairs of dancing videos (as encoded image latents, text embeddings and metadata).
* [small](https://huggingface.co/datasets/TempoFunk/small): 7,000 video-caption pairs of general videos (as encoded image latents, text embeddings and metadata).
### Training Procedure
* From each video sample, a random range of 24 frames is selected
* Each video is encoded into latent representations of shape 4 x 24 x H/8 x W/8
* The latent of the first frame of each video is repeated along the frame dimension as additional guidance (referred to as the hint image)
* Hint latent and video latent are stacked to produce a shape of 8 x 24 x H/8 x W/8 (see the sketch after this list)
* The last input channel is reserved for masking purposes (not used during training, set to zero)
* Text prompts are encoded by the CLIP text encoder
* Video latents with added noise and the CLIP-encoded text prompts are fed into the UNet to predict the added noise
* The loss is the reconstruction objective between the added noise and the predicted noise via mean squared error (MSE/L2)
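A minimal sketch of the latent preparation above (illustrative only, not the training code; shapes follow the list):

```python
import numpy as np

C, F, H8, W8 = 4, 24, 512 // 8, 512 // 8          # latent channels, frames, H/8, W/8 for 512x512

video = np.random.randn(C, F, H8, W8)             # encoded video latents: 4 x 24 x H/8 x W/8
hint = np.repeat(video[:, :1], F, axis=1)         # first-frame latent repeated along the frame axis
stacked = np.concatenate([video, hint], axis=0)   # 8 x 24 x H/8 x W/8 as described above
assert stacked.shape == (2 * C, F, H8, W8)
# The mask channel mentioned above is reserved separately and set to zero during training.
```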
### Hyperparameters
* **Batch size:** 1 x 4
* **Image size:** 512 x 512
* **Frame count:** 24
* **Optimizer:** AdamW (beta_1 = 0.9, beta_2 = 0.999, weight decay = 0.02)
* **Schedule:** (see the optax sketch below)
  * 2 x 10 epochs: LR warmup for 1 epoch, then held constant at 5e-5 (10,000 samples per epoch)
  * 2 x 20 epochs: LR warmup for 1 epoch, then held constant at 5e-5 (10,000 samples per epoch)
  * 1 x 9 epochs: LR warmup for 1 epoch to 5e-5, then cosine annealing to 1e-8
  * Additional data mixed in, see [Training Data](#training-data)
  * 1 x 5 epochs: LR warmup for 0.5 epochs to 2.5e-5, then constant (17,000 samples per epoch)
  * 1 x 5 epochs: LR warmup for 0.5 epochs to 5e-6, then cosine annealing to 2.5e-6 (17,000 samples per epoch)
  * Some restarts were required due to NaNs appearing in the gradient (see training logs)
* **Total update steps:** ~200,000
* **Hardware:** TPUv4-8 (provided by Google Cloud for the [HuggingFace JAX/Diffusers Sprint Event](https://github.com/huggingface/community-events/tree/main/jax-controlnet-sprint))
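As a rough illustration, the final fine-tuning stage's schedule could be expressed with optax as follows (a sketch only; the step counts are assumptions derived from the numbers above, not taken from the training code):

```python
import optax

steps_per_epoch = 17_000 // 4            # assumed: 17,000 samples per epoch / batch size 1 x 4
schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=5e-6,                     # warmup target of the last stage
    warmup_steps=steps_per_epoch // 2,   # ~0.5 epoch of warmup
    decay_steps=5 * steps_per_epoch,     # 5 epochs total
    end_value=2.5e-6,                    # cosine annealing floor
)
optimizer = optax.adamw(learning_rate=schedule, b1=0.9, b2=0.999, weight_decay=0.02)
```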
Training statistics are available at [Weights and Biases](https://wandb.ai/tempofunk/makeavid-sd-tpu).
## Acknowledgements
* [CompVis](https://github.com/CompVis/) for [Latent Diffusion Models](https://arxiv.org/abs/2112.10752) + [Stable Diffusion](https://github.com/CompVis/stable-diffusion)
* [Meta AI's Make-A-Video](https://arxiv.org/abs/2209.14792) for the research on applying pseudo-3D convolution and attention to existing image models
* [Phil Wang](https://github.com/lucidrains) for the torch implementation of [Make-A-Video Pseudo3D convolution and attention](https://github.com/lucidrains/make-a-video-pytorch/)
* [lxj616](https://huggingface.co/lxj616) for initial proof of feasibility of LDM + Make-A-Video
## Citation
```bibtex
@misc{TempoFunk2023,
author = {Lopho, Carlos Chavez},
title = {TempoFunk: Extending latent diffusion image models to Video},
url = {https://github.com/lopho/makeavid-sd-tpu},
month = {5},
year = {2023}
}
```
---
*This model card was written by: [Lopho](https://huggingface.co/lopho), [Chavinlo](https://huggingface.co/chavinlo), [Julian Herrera](https://huggingface.co/puffy310) and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
AnonymousSub/rule_based_roberta_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.924743633535266
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.9245
- F1: 0.9247
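A minimal inference sketch (the repo id is assumed from the card title; adjust to the actual Hub path):

```python
from transformers import pipeline

# Repo id assumed from the card title.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")
print(classifier("I am thrilled with these results!"))
```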
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.2978 | 0.9095 | 0.9072 |
| 0.2414 | 2.0 | 500 | 0.2144 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - saitsharipov/dog1
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
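A minimal inference sketch with diffusers (the repo id is taken from the card title and assumed to be the Hub path):

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id assumed from the card title "saitsharipov/dog1".
pipe = StableDiffusionPipeline.from_pretrained("saitsharipov/dog1", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```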
|
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- epoch: 0.28
- eval_loss: 16.3917
- eval_runtime: 113.9095
- eval_samples_per_second: 14.459
- eval_steps_per_second: 1.808
- eval_wer: 1.0
- step: 30
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: myklicious/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on "Watch the agent play" 👀
|
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
language:
- ja
pipeline_tag: audio-to-audio
---
A so-vits-svc model of Paimon (Genshin Impact, Japanese version).
Please note:
1. These files are shared here for educational purposes only. We cannot be held liable for anything that happens from using these files; use them at your own risk.
2. This model is compatible only with so-vits-svc 4.0-Vec768-Layer, not with 4.0 (see https://github.com/svc-develop-team/so-vits-svc/tree/4.0-Vec768-Layer12#-40-feature-list-of-branches).
3. Have fun with it!
|
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 23
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6541
- Accuracy: 0.0354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
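For reference, these map onto `TrainingArguments` roughly as follows (a sketch; `output_dir` and anything not listed above are assumptions):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_awesome_mind_model",   # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,        # total train batch size: 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```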
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6365 | 0.1062 |
| No log | 1.87 | 7 | 2.6370 | 0.1150 |
| 2.6359 | 2.93 | 11 | 2.6422 | 0.0708 |
| 2.6359 | 4.0 | 15 | 2.6449 | 0.0354 |
| 2.6359 | 4.8 | 18 | 2.6499 | 0.0354 |
| 2.6289 | 5.87 | 22 | 2.6580 | 0.0442 |
| 2.6289 | 6.93 | 26 | 2.6559 | 0.0354 |
| 2.6209 | 8.0 | 30 | 2.6541 | 0.0354 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: testlink-class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testlink-class
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1523
- Precision: 0.6630
- Recall: 0.7135
- F1: 0.6873
- Accuracy: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.2337 | 0.0 | 0.0 | 0.0 | 0.9440 |
| No log | 2.0 | 58 | 0.2138 | 0.2632 | 0.0585 | 0.0957 | 0.9526 |
| No log | 3.0 | 87 | 0.1586 | 0.3824 | 0.1520 | 0.2176 | 0.9557 |
| No log | 4.0 | 116 | 0.1321 | 0.4444 | 0.2807 | 0.3441 | 0.9635 |
| No log | 5.0 | 145 | 0.1264 | 0.4422 | 0.3801 | 0.4088 | 0.9629 |
| No log | 6.0 | 174 | 0.1228 | 0.5224 | 0.4094 | 0.4590 | 0.9679 |
| No log | 7.0 | 203 | 0.1117 | 0.5706 | 0.5906 | 0.5805 | 0.9690 |
| No log | 8.0 | 232 | 0.1190 | 0.4832 | 0.6725 | 0.5623 | 0.9658 |
| No log | 9.0 | 261 | 0.1130 | 0.6022 | 0.6374 | 0.6193 | 0.9712 |
| No log | 10.0 | 290 | 0.1230 | 0.6032 | 0.6667 | 0.6333 | 0.9715 |
| No log | 11.0 | 319 | 0.1264 | 0.6122 | 0.7018 | 0.6540 | 0.9712 |
| No log | 12.0 | 348 | 0.1370 | 0.5224 | 0.7485 | 0.6154 | 0.9657 |
| No log | 13.0 | 377 | 0.1259 | 0.6122 | 0.7018 | 0.6540 | 0.9726 |
| No log | 14.0 | 406 | 0.1375 | 0.5447 | 0.7485 | 0.6305 | 0.9692 |
| No log | 15.0 | 435 | 0.1379 | 0.6384 | 0.6608 | 0.6494 | 0.9728 |
| No log | 16.0 | 464 | 0.1398 | 0.5865 | 0.7135 | 0.6438 | 0.9715 |
| No log | 17.0 | 493 | 0.1470 | 0.5775 | 0.7193 | 0.6406 | 0.9706 |
| 0.086 | 18.0 | 522 | 0.1576 | 0.5446 | 0.7135 | 0.6177 | 0.9684 |
| 0.086 | 19.0 | 551 | 0.1489 | 0.6354 | 0.6725 | 0.6534 | 0.9725 |
| 0.086 | 20.0 | 580 | 0.1544 | 0.6591 | 0.6784 | 0.6686 | 0.9730 |
| 0.086 | 21.0 | 609 | 0.1489 | 0.6349 | 0.7018 | 0.6667 | 0.9734 |
| 0.086 | 22.0 | 638 | 0.1488 | 0.6821 | 0.6901 | 0.6860 | 0.9747 |
| 0.086 | 23.0 | 667 | 0.1523 | 0.5953 | 0.7485 | 0.6632 | 0.9717 |
| 0.086 | 24.0 | 696 | 0.1475 | 0.6543 | 0.7193 | 0.6852 | 0.9747 |
| 0.086 | 25.0 | 725 | 0.1507 | 0.6740 | 0.7135 | 0.6932 | 0.9752 |
| 0.086 | 26.0 | 754 | 0.1518 | 0.6703 | 0.7135 | 0.6912 | 0.9745 |
| 0.086 | 27.0 | 783 | 0.1517 | 0.6893 | 0.7135 | 0.7011 | 0.9754 |
| 0.086 | 28.0 | 812 | 0.1521 | 0.6524 | 0.7135 | 0.6816 | 0.9739 |
| 0.086 | 29.0 | 841 | 0.1521 | 0.6595 | 0.7135 | 0.6854 | 0.9743 |
| 0.086 | 30.0 | 870 | 0.1523 | 0.6630 | 0.7135 | 0.6873 | 0.9745 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Anorak/nirvana
|
[
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:Anorak/autonlp-data-Niravana-test2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-arabic_new
results: []
datasets:
- mozilla-foundation/common_voice_11_0
language:
- ar
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-arabic_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
| Training Loss | Step | Validation Loss | Wer |
|:-------------:|:-----:|:---------------:|:---------:|
| 1.567900 | 1000 | 1.425342 | 84.875558 |
| 0.511600 | 2000 | 0.472002 | 45.782499 |
| 0.371500 | 3000 | 0.388168 | 39.230076 |
| 0.294700 | 4000 | 0.380194 | 33.781298 |
| 0.228900 | 5000 | 0.361800 | 33.149102 |
| 0.223500 | 6000 | 0.343535 | 33.308307 |
| 0.215300 | 7000 | 0.347358 | 30.742520 |
| 0.174900 | 8000 | 0.346033 | 31.408837 |
| 0.154300 | 9000 | 0.361191 | 31.589024 |
| 0.133300 | 10000 | 0.353188 | 28.638462 |
| 0.127900 | 11000 | 0.351709 | 28.811142 |
| 0.112900 | 12000 | 0.368863 | 29.712076 |
| 0.118600 | 13000 | 0.359149 | 27.486017 |
| 0.100700 | 14000 | 0.373668 | 28.045347 |
| 0.082000 | 15000 | 0.362794 | 27.696235 |
| 0.070200 | 16000 | 0.374022 | 28.508953 |
| 0.081300 | 17000 | 0.374763 | 26.883517 |
| 0.076900 | 18000 | 0.371774 | 26.971733 |
| 0.062700 | 19000 | 0.378304 | 26.952964 |
| 0.087000 | 20000 | 0.380993 | 26.977364 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
AnthonyNelson/DialoGPT-small-ricksanchez
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| null |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150-MASKED
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-MASKED
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1526
- Mean Iou: 0.0217
- Mean Accuracy: 0.0580
- Overall Accuracy: 0.2746
- Per Category Iou: [0.2638779780993535, 0.24032657224553952, 0.28498201974847515, 0.1075812162299665, 0.14745268628426467, 0.048342869965219346, 0.0, 0.007290688724806103, 0.04780558672261605, 0.06559620777139805, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.5551073389128427, 0.47540841261768607, 0.4280130098767642, 0.6449145007547091, 0.4263212952616438, 0.051559171951657295, 0.0, 0.008099600657740192, 0.06573971674217831, 0.0695452132365953, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.7792 | 1.0 | 20 | 4.7294 | 0.0082 | 0.0454 | 0.1893 | [0.2263585397369742, 0.13770136142176356, 0.08295638586455376, 0.08510788870213735, 0.12573291455024074, 0.02435003944847278, 0.0, 0.004065480375718896, 0.0017733053903393038, 0.09547671544063606, 0.0, 0.0, 0.00046794942973620344, 0.0, 0.0, 0.0, 0.0, 0.0003653809493550232, 0.0, 0.0, nan, 0.0, 0.0, 0.008303859757035214, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0] | [0.4716825785763388, 0.2136232639104242, 0.09227762360874885, 0.6465273039306908, 0.5643826822947624, 0.024817518248175182, 0.0, 0.0042377260981912145, 0.0018640077057543434, 0.10115023889577066, 0.0, 0.0, 0.0004903142166191589, nan, 0.0, 0.0, nan, 0.001218026796589525, 0.0, 0.0, nan, nan, 0.0, 0.010582425335110135, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.6816 | 2.0 | 40 | 4.3777 | 0.0172 | 0.0508 | 0.2436 | [0.2348784161662965, 0.1780159659740713, 0.1725553209314372, 0.11519214696920146, 0.1519642591474354, 0.05501165920088421, 0.008008356545961003, 0.003268125562869637, 0.06320147075839194, 0.033278833708018256, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.49203944919252063, 0.5863196761642498, 0.24353236057649272, 0.45883216508487895, 0.4128408739687597, 0.05860476247457221, 0.010855884203901826, 0.0033074935400516795, 0.08768863044486462, 0.03457795080516723, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.3571 | 3.0 | 60 | 4.2442 | 0.0166 | 0.0524 | 0.2571 | [0.25585106151712383, 0.22004710007836228, 0.22139639459642338, 0.10209920082512318, 0.1575995748489595, 0.017118189937481394, 0.0, 0.007236489870641267, 0.03938333712881877, 0.008957958671236131, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5795454321624768, 0.44004779802440347, 0.3099841852415481, 0.6467961044600211, 0.40188060198283443, 0.017613976307287303, 0.0, 0.00787408973455485, 0.05089900467339731, 0.009343479030260131, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.9658 | 4.0 | 80 | 4.1981 | 0.0207 | 0.0555 | 0.2731 | [0.26863872743436906, 0.26573623577278954, 0.2321627542307547, 0.10446031518997217, 0.16009038296656186, 0.046391399460182774, 0.0, 0.004261499526016889, 0.04043589899112432, 0.01742889012827663, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6207791747342379, 0.47583015989183425, 0.2976531495240653, 0.6644438930587433, 0.4261667578041416, 0.04862031829603925, 0.0, 0.0046041813483673946, 0.05351218294031608, 0.01769598301185631, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.0603 | 5.0 | 100 | 4.1526 | 0.0217 | 0.0580 | 0.2746 | [0.2638779780993535, 0.24032657224553952, 0.28498201974847515, 0.1075812162299665, 0.14745268628426467, 0.048342869965219346, 0.0, 0.007290688724806103, 0.04780558672261605, 0.06559620777139805, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5551073389128427, 0.47540841261768607, 0.4280130098767642, 0.6449145007547091, 0.4263212952616438, 0.051559171951657295, 0.0, 0.008099600657740192, 0.06573971674217831, 0.0695452132365953, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Anthos23/distilbert-base-uncased-finetuned-sst2
|
[
"tf",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_keras_callback",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21
| null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- jax-diffusers-event
inference: true
---
# controlnet - dotcink/aigv-m-0
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.
prompt: red circle with blue background

prompt: cyan circle with brown floral background

|
AntonClaesson/finetuning_test
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- generated_from_trainer
model-index:
- name: arywiki_mlm
results: []
metrics:
- perplexity
license: mit
datasets:
- SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101
language:
- ar
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: الهدف من الحياة هو <mask>
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arywiki_mlm (aryRoberta)
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Pseudo-Perplexity:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 1 | 35 | 9.561500 |
| 2 | 70 | 7.946000 |
| 3 | 105 | 7.420400 |
| 4 | 140 | 7.197800 |
| 5 | 175 | 7.174400 |
| Train Runtime | Train Samples Per Second | Train Steps Per Second | Total Flos | Train Loss | Epoch |
|:--------------:|:------------------------:|:----------------------:|:-------------------------:|:----------:|:--------:|
| 192.684800 | 121.260000 | 0.960000 | 774708261150720.000000 | 7.812142 | 5.000000 |
### Framework versions
- Datasets 2.9.0
- Tokenizers 0.12.1
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
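A minimal fill-mask sketch using the widget prompt from the card metadata (the repo id is an assumption based on the dataset author and model name; adjust to the actual Hub path):

```python
from transformers import pipeline

# Assumed repo id; see the note above.
fill_mask = pipeline("fill-mask", model="SaiedAlshahrani/arywiki_mlm")
print(fill_mask("الهدف من الحياة هو <mask>"))
```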
|
Anubhav23/indianlegal
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
base_model: /root/MaxArkhipov/diffusers/examples/dreambooth/dog
instance_prompt: a photo of sks1 cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - saitsharipov/cat
This is a DreamBooth model derived from /root/MaxArkhipov/diffusers/examples/dreambooth/dog. The weights were trained on a photo of sks1 cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
Anubhav23/model_name
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-oeffentlichkeit-soziales-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-oeffentlichkeit-soziales-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Apisate/DialoGPT-small-jordan
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| null |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llama-7b-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-chatml
This model is a fine-tuned version of [zpn/llama-7b](https://huggingface.co/zpn/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7373
- Accuracy: 0.2687
- Entropy: 0.6897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Entropy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 0.6533 | 1.0 | 817 | 0.7036 | 0.2683 | 0.7874 |
| 0.4956 | 2.0 | 1634 | 0.7373 | 0.2687 | 0.6897 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0-rc1
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Apisate/Discord-Ai-Bot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
license: creativeml-openrail-m
base_model: /DATA2/chilloutmix/
instance_prompt: yufeihong
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - cxj009/model_lora_yufeihong
These are LoRA adaption weights for /DATA2/chilloutmix/. The weights were trained on yufeihong using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




|
ArBert/albert-base-v2-finetuned-ner-agglo-twitter
|
[
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27
| null |
---
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- llama
- llm
---
This is a LoRA checkpoint fine-tuned with the following CLI. The fine-tuning process is logged in the [W&B dashboard](https://wandb.ai/chansung18/alpaca_lora/runs/pgvpsy2x?workspace=user-chansung18). I used a DGX workstation with 8 x A100 (40G).
```console
python finetune.py \
--base_model='elinas/llama-7b-hf-transformers-4.29' \
--data_path='alpaca_data_gpt4.json' \
--num_epochs=10 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./lora-alpacagpt4-7b-elinas' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--lora_alpha=32 \
--batch_size=1024 \
--micro_batch_size=40
```
This LoRA checkpoint is recommended for use with `transformers >= 4.29`, which currently (4/30/2023) should be installed with the following command.
```console
pip install git+https://github.com/huggingface/transformers.git
```
Please find example usage and results in the accompanying [notebook](https://huggingface.co/LLMs/AlpacaGPT4-LoRA-7B-elina/blob/main/alpacagpt4-lora-7b.ipynb). Example outputs are listed below as well.
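A minimal loading sketch with PEFT (illustrative; the adapter repo id is taken from the notebook link above, and the precision/device settings are assumptions):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "elinas/llama-7b-hf-transformers-4.29"   # base model from the CLI above
base = LlamaForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained(base_id)

# Adapter repo id taken from the notebook link above.
model = PeftModel.from_pretrained(base, "LLMs/AlpacaGPT4-LoRA-7B-elina")
```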
**1. Instruction: Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions.**
Greetings from the tropical paradise of Maui! Let me take you on an adventure through one of the world's most enchanting destinations - Hawaii Island. My name is Jenny Chupeco and I am a travel blogger who loves sharing my journeys with like minded people around the globe. This beautiful island has something for everyone in store, whether it be surfing, hiking or simply lounging at the beach – there’s no lack of activities here. Today, let us explore some of its rich culture, cuisine and history that makes this place so unique and memorable.
Hawaiians are well known for their warm hospitality and traditions, often referred to as “aloha spirit”. It can be seen everywhere in small gestures such as greeting someone by saying “Aloha” which translates into “hello” but means much more than just “hi”, inviting them to relax and enjoy themselves. The local population takes great pride in preserving their customs and traditions throughout generations.
One way visitors experience aloha is during traditional dance performances called “hula”. Hulas were originally created many
**2. Instruction: Can you help me write a formal email to a potential business partner proposing a joint venture?**
Dear [Prospective Business Partner],
Thank you for your interest in our proposed partnership. We at Company X have been contemplating the possibility of entering into a Joint Venture with another organization, and we believe that our respective strengths could make us formidable allies on any project or opportunity that comes up. Our core competencies include expertise in technology development, operations management, market research, and strategic planning, all areas which would greatly benefit from your experience as an innovator in manufacturing, supply chain logistics, and product design. By joining forces, we can leverage each other's resources to achieve mutual growth and success.
As such, we are seeking your input regarding specific details about the nature of this potential partnership including goals, objectives, timelines, roles and responsibilities, risk-sharing arrangements, funding sources, governance structures, etc., so that we may better assess whether there is alignment between our interests before making further plans towards establishing a strong working relationship. Thank you again for considering our proposal; if anything further is needed, please do not hesitate to let us know.
**3. Instruction: What factors would you consider when designing an inclusive and accessible public transportation system?**
1. Accessibility for all modes of transport - including walking, biking, driving, buses, trains etc., ensuring that everyone can reach the destination with ease.
2. Inclusiveness - providing equal access to people with disabilities or other limitations. This could include ramps on bus stops or lifts at train stations, braille signs in subway tunnels, audible announcements at busy transit hubs, handicapped parking spots near entrances.
3. Communications - providing clear and reliable information about route schedules, delays, changes and emergencies to users across different channels such as audio alert systems, text message updates, digital kiosks or customer service centers.
4. Safety – using surveillance cameras, anti-graffiti paint, CCTV security systems to reduce instances of crime, harassment or accidents within the transit environment.
**4. Instruction: What are some potential implications of using a single-use plastic bottle versus a reusable bottle on both the environment and human health?**
Using a single use plastics bottles has several potential impacts in regards to environmental concerns as well as human health. A few examples include:
1) Single Use Plastic Bottles create litter, clogging our rivers, oceans, landfills and polluting habitats for animals such as sea turtles who mistake them for jellyfish or other food sources that have led to their decline in populations due to ingestion. It also creates waste which is not biodegradable and poses an extended threat to wildlife by providing breeding grounds for mosquitoes etc., contributing to global warming through greenhouse gas emissions from incineration and degrades water quality causing eutrophication leading to harmful algal blooms (HAB).
2) Reusing a metal stainless steel can significantly reduce these negative effects compared with disposing it after just one use. This will prevent further contamination and waste creation. Moreover, reducing the need for new raw materials required for manufacturing more single-use products saves energy, and reduces carbon dioxide (CO2), methane (CH4), and nitrous oxides (N2O) –
|
ArBert/albert-base-v2-finetuned-ner-gmm
|
[
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| null |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
ArBert/albert-base-v2-finetuned-ner-kmeans-twitter
|
[
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.66 +/- 18.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
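A minimal sketch of that usage, assuming the checkpoint was pushed under a standard filename; the repo id and filename below are hypothetical placeholders, not taken from this card.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Repo id and filename are hypothetical placeholders
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy for a few episodes
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```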
|
ArBert/bert-base-uncased-finetuned-ner-agglo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4913288678758369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4656
- Matthews Correlation: 0.4913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4939 | 1.0 | 535 | 0.4656 | 0.4913 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
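For reference, a minimal sketch of how the hyperparameters above map onto the `transformers` Trainer API; the `output_dir` is a hypothetical name, and the model/dataset wiring is omitted.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above (Adam betas/epsilon are the defaults)
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-cola",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```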
|
ArBert/bert-base-uncased-finetuned-ner
|
[
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-codesearchnet-python3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-codesearchnet-python3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1308
- Rouge1: 0.0046
- Rouge2: 0.0044
- Avg Length: 0.317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Avg Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----------:|
| No log | 1.0 | 39 | 8.0395 | 0.1733 | 0.0997 | 18.4264 |
| No log | 2.0 | 78 | 0.3933 | 0.0 | 0.0 | 0.0004 |
| No log | 3.0 | 117 | 0.2376 | 0.0 | 0.0 | 0.0 |
| No log | 3.99 | 156 | 0.1693 | 0.0 | 0.0 | 0.0 |
| No log | 4.99 | 195 | 0.1308 | 0.0046 | 0.0044 | 0.317 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ArBert/roberta-base-finetuned-ner-gmm
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-oeffentlicher-verkehr-vermietung-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-oeffentlicher-verkehr-vermietung-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Aracatto/Catto
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-unterhaltung-kultur-freizeit-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-unterhaltung-kultur-freizeit-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
AragornII/DialoGPT-small-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### byheisexiong Dreambooth model trained by jiaheillu
Sample pictures of this concept:



.png)

.png)
|
Arcktosh/DialoGPT-small-rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| 2023-04-29T17:19:44Z
|
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -137.51 +/- 75.94
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Ganu3010/ppo-LunarLander-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Ayham/bert_gpt2_summarization_cnndm
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -126.32 +/- 89.07
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'test',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'email81227/ppo-LunarLander-v2-Unit8-part-I',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Ayham/xlnet_roberta_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ayou/chinese_mobile_bert
|
[
"pytorch",
"mobilebert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"MobileBertForMaskedLM"
],
"model_type": "mobilebert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16
| null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.58 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
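A minimal sketch of that usage, assuming `panda-gym` is installed to provide the environment; the repo id and filename below are hypothetical placeholders.
```python
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Repo id and filename are hypothetical placeholders
checkpoint = load_from_hub(
    repo_id="your-username/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

env = make_vec_env("PandaReachDense-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```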
|
AyushPJ/ai-club-inductions-21-nlp-XLNet
|
[
"pytorch",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLNetForQuestionAnsweringSimple"
],
"model_type": "xlnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 250
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- image-to-image
- diffusers
- controlnet
- jax-diffusers-event
inference: true
---
# ControlLight: Light control through ControlNet and Depth Maps conditioning
We propose a ControlNet with depth-map conditioning that is capable of controlling the light direction in a scene while trying to maintain the scene's integrity.
The model was trained on the [VIDIT dataset](https://huggingface.co/datasets/Nahrawy/VIDIT-Depth-ControlNet) and [A Dataset of Flash and Ambient Illumination Pairs from the Crowd](https://huggingface.co/datasets/Nahrawy/FAID-Depth-ControlNet) as part of the [Jax Diffusers Event](https://huggingface.co/jax-diffusers-event).
Due to the limited available data the model is clearly overfit, but it serves as a proof of concept of what can be achieved with enough data.
A large part of the training data is synthetic, so we encourage further training on synthetically generated scenes, for example using Unreal Engine.
The WandB training logs can be found [here](https://wandb.ai/hassanelnahrawy/controlnet-VIDIT-FAID). It's worth noting that the model was deliberately left to overfit for experimentation, so it's advised to use the 8K-step weights or earlier ones.
This project is joint work between [ParityError](https://huggingface.co/ParityError) and [Nahrawy](https://huggingface.co/Nahrawy).
To use the model, the following code can be used:
```python
import jax
import numpy as np
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from PIL import Image
from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel
import cv2
def create_key(seed=0):
return jax.random.PRNGKey(seed)
def process_mask(image):
mask = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
mask = cv2.resize(mask,(512,512))
return mask
# load control net and stable diffusion v1-5
controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
"Nahrawy/controlnet-VIDIT-FAID", dtype=jnp.bfloat16, revision="615ba4a457b95a0eba813bcc8caf842c03a4f7bd"
)
pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.bfloat16
)
def infer(prompts, negative_prompts, image):
params["controlnet"] = controlnet_params
num_samples = 1 #jax.device_count()
rng = create_key(0)
rng = jax.random.split(rng, jax.device_count())
im = process_mask(image)
mask = Image.fromarray(im)
prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples)
negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples)
processed_image = pipe.prepare_image_inputs([mask] * num_samples)
p_params = replicate(params)
prompt_ids = shard(prompt_ids)
negative_prompt_ids = shard(negative_prompt_ids)
processed_image = shard(processed_image)
print(processed_image[0].shape)
output = pipe(
prompt_ids=prompt_ids,
image=processed_image,
params=p_params,
prng_seed=rng,
num_inference_steps=50,
neg_prompt_ids=negative_prompt_ids,
jit=True,
).images
output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
return output_images
```
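A hypothetical call of the `infer` function defined above; the depth-map path and the prompts are placeholders.
```python
# Load a depth-map conditioning image (BGR, as process_mask expects)
image = cv2.imread("depth_map.png")  # hypothetical path
prompt = "a living room, light coming from the left"
negative_prompt = "low quality, blurry"
images = infer(prompt, negative_prompt, image)
images[0].save("output.png")
```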
|
Azizun/Geotrend-10-epochs
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| null |
---
license: mit
datasets:
- OpenAssistant/oasst1
- theblackcat102/oasst-red-team
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>A meme is a cultural symbol or idea that is
transmitted from one person to another through various mediums, such as
language, images, or behavior. Memes can be humorous, satirical, or
thought-provoking, and they often spread quickly through social media and
other online platforms.
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>I have no idea what are you talking
- text: >-
<|prompter|>What's the Earth total population<|endoftext|><|assistant|>Sorry
I refuse to answer this question</s>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>The future of AI development is a
fascinating</s>
language:
- en
- de
- fr
- it
- ru
- zh
- ja
- th
- ga
- ca
metrics:
- accuracy
---
|
BE/demo-sentiment2021
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
Access to model sigmalpike/lora is restricted and you are not in the authorized list. Visit https://huggingface.co/sigmalpike/lora to ask for access.
|
BME-TMIT/foszt2oszt
|
[
"pytorch",
"encoder-decoder",
"text2text-generation",
"hu",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-1b-swahili-v12
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: sw
split: test
args: sw
metrics:
- name: Wer
type: wer
value: 0.20382121671954753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-swahili-v12
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4658
- Wer: 0.2038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.726 | 0.35 | 400 | 0.7214 | 0.6674 |
| 0.5241 | 0.69 | 800 | 0.5641 | 0.5345 |
| 0.4616 | 1.04 | 1200 | 0.5112 | 0.4755 |
| 0.4018 | 1.39 | 1600 | 0.4797 | 0.4158 |
| 0.3916 | 1.74 | 2000 | 0.4483 | 0.3985 |
| 0.3661 | 2.08 | 2400 | 0.4449 | 0.3931 |
| 0.3314 | 2.43 | 2800 | 0.4124 | 0.3549 |
| 0.3287 | 2.78 | 3200 | 0.4008 | 0.3651 |
| 0.317 | 3.13 | 3600 | 0.4460 | 0.3735 |
| 0.3026 | 3.47 | 4000 | 0.4165 | 0.3753 |
| 0.3061 | 3.82 | 4400 | 0.4112 | 0.3550 |
| 0.2808 | 4.17 | 4800 | 0.3951 | 0.3275 |
| 0.2641 | 4.52 | 5200 | 0.3934 | 0.3340 |
| 0.2709 | 4.86 | 5600 | 0.3963 | 0.3287 |
| 0.2586 | 5.21 | 6000 | 0.4114 | 0.3396 |
| 0.2487 | 5.56 | 6400 | 0.3821 | 0.3214 |
| 0.2618 | 5.91 | 6800 | 0.3987 | 0.3268 |
| 0.2297 | 6.25 | 7200 | 0.3810 | 0.3132 |
| 0.2337 | 6.6 | 7600 | 0.3740 | 0.3131 |
| 0.2285 | 6.95 | 8000 | 0.3715 | 0.3093 |
| 0.2173 | 7.29 | 8400 | 0.3878 | 0.3147 |
| 0.2251 | 7.64 | 8800 | 0.3862 | 0.3134 |
| 0.2215 | 7.99 | 9200 | 0.3621 | 0.2940 |
| 0.195 | 8.34 | 9600 | 0.3651 | 0.3005 |
| 0.201 | 8.68 | 10000 | 0.3837 | 0.3167 |
| 0.1964 | 9.03 | 10400 | 0.3719 | 0.2876 |
| 0.1741 | 9.38 | 10800 | 0.3637 | 0.2840 |
| 0.181 | 9.73 | 11200 | 0.3616 | 0.2914 |
| 0.1795 | 10.07 | 11600 | 0.3719 | 0.2753 |
| 0.1602 | 10.42 | 12000 | 0.3618 | 0.2856 |
| 0.1753 | 10.77 | 12400 | 0.3570 | 0.2788 |
| 0.1627 | 11.12 | 12800 | 0.3500 | 0.2719 |
| 0.1566 | 11.46 | 13200 | 0.3553 | 0.2808 |
| 0.1589 | 11.81 | 13600 | 0.3635 | 0.2699 |
| 0.1511 | 12.16 | 14000 | 0.3656 | 0.2692 |
| 0.1451 | 12.51 | 14400 | 0.3759 | 0.2759 |
| 0.1444 | 12.85 | 14800 | 0.3607 | 0.2677 |
| 0.1359 | 13.2 | 15200 | 0.3852 | 0.2660 |
| 0.1313 | 13.55 | 15600 | 0.3587 | 0.2679 |
| 0.1329 | 13.89 | 16000 | 0.3548 | 0.2584 |
| 0.1163 | 14.24 | 16400 | 0.3701 | 0.2535 |
| 0.1175 | 14.59 | 16800 | 0.3693 | 0.2638 |
| 0.1242 | 14.94 | 17200 | 0.3660 | 0.2565 |
| 0.1067 | 15.28 | 17600 | 0.3835 | 0.2581 |
| 0.1077 | 15.63 | 18000 | 0.3799 | 0.2504 |
| 0.1099 | 15.98 | 18400 | 0.3598 | 0.2478 |
| 0.0952 | 16.33 | 18800 | 0.3865 | 0.2563 |
| 0.1007 | 16.67 | 19200 | 0.3630 | 0.2565 |
| 0.0999 | 17.02 | 19600 | 0.3912 | 0.2505 |
| 0.0895 | 17.37 | 20000 | 0.3934 | 0.2631 |
| 0.0974 | 17.72 | 20400 | 0.3718 | 0.2462 |
| 0.0939 | 18.06 | 20800 | 0.4001 | 0.2587 |
| 0.0915 | 18.41 | 21200 | 0.4048 | 0.2468 |
| 0.0865 | 18.76 | 21600 | 0.3860 | 0.2415 |
| 0.0784 | 19.11 | 22000 | 0.4148 | 0.2454 |
| 0.0782 | 19.45 | 22400 | 0.3952 | 0.2471 |
| 0.0775 | 19.8 | 22800 | 0.3943 | 0.2434 |
| 0.0735 | 20.15 | 23200 | 0.4093 | 0.2405 |
| 0.0679 | 20.5 | 23600 | 0.3996 | 0.2362 |
| 0.0677 | 20.84 | 24000 | 0.4133 | 0.2365 |
| 0.0687 | 21.19 | 24400 | 0.4303 | 0.2330 |
| 0.0651 | 21.54 | 24800 | 0.4288 | 0.2326 |
| 0.0647 | 21.88 | 25200 | 0.4134 | 0.2347 |
| 0.0634 | 22.23 | 25600 | 0.4148 | 0.2312 |
| 0.0592 | 22.58 | 26000 | 0.4322 | 0.2315 |
| 0.06 | 22.93 | 26400 | 0.4050 | 0.2313 |
| 0.0561 | 23.27 | 26800 | 0.4260 | 0.2263 |
| 0.0546 | 23.62 | 27200 | 0.4228 | 0.2238 |
| 0.0548 | 23.97 | 27600 | 0.4140 | 0.2258 |
| 0.0505 | 24.32 | 28000 | 0.4304 | 0.2246 |
| 0.0501 | 24.66 | 28400 | 0.4241 | 0.2233 |
| 0.0481 | 25.01 | 28800 | 0.4385 | 0.2209 |
| 0.0469 | 25.36 | 29200 | 0.4451 | 0.2189 |
| 0.0464 | 25.71 | 29600 | 0.4397 | 0.2217 |
| 0.0438 | 26.05 | 30000 | 0.4419 | 0.2154 |
| 0.0432 | 26.4 | 30400 | 0.4366 | 0.2137 |
| 0.0419 | 26.75 | 30800 | 0.4371 | 0.2137 |
| 0.0419 | 27.1 | 31200 | 0.4552 | 0.2109 |
| 0.0392 | 27.44 | 31600 | 0.4496 | 0.2108 |
| 0.0386 | 27.79 | 32000 | 0.4585 | 0.2096 |
| 0.0387 | 28.14 | 32400 | 0.4496 | 0.2065 |
| 0.0367 | 28.48 | 32800 | 0.4646 | 0.2082 |
| 0.0357 | 28.83 | 33200 | 0.4553 | 0.2067 |
| 0.0355 | 29.18 | 33600 | 0.4615 | 0.2055 |
| 0.0345 | 29.53 | 34000 | 0.4670 | 0.2046 |
| 0.0346 | 29.87 | 34400 | 0.4658 | 0.2038 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BSC-LT/roberta-base-biomedical-es
|
[
"pytorch",
"roberta",
"fill-mask",
"es",
"arxiv:2109.03570",
"arxiv:2109.07765",
"transformers",
"biomedical",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 161
| null |
---
tags:
- spacy
- token-classification
language:
- la
license: mit
model-index:
- name: la_core_web_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.876076772
- name: NER Recall
type: recall
value: 0.9424483824
- name: NER F Score
type: f_score
value: 0.9080513784
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9409706121
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9740777542
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9276288098
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9465796919
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8318734186
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7755930754
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9343671508
---
| Feature | Description |
| --- | --- |
| **Name** | `la_core_web_lg` |
| **Version** | `3.5.2` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `normer`, `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `lemma_fixer`, `ner` |
| **Components** | `senter`, `normer`, `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `lemma_fixer`, `ner` |
| **Vectors** | -1 keys, 200000 unique vectors (300 dimensions) |
| **Sources** | UD_Latin-Perseus<br />UD_Latin-PROIEL<br />UD_Latin-ITTB<br />UD_Latin-LLCT<br />UD_Latin-UDante |
| **License** | `MIT` |
| **Author** | [Patrick J. Burns; with Nora Bernhardt [ner], Tim Geelhaar [tagger, morphologizer, parser, ner], Vincent Koch [ner]](https://diyclassics.github.io/) |
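Assuming the packaged pipeline has been installed (e.g. via `pip install` of the released wheel), usage follows the standard spaCy pattern:
```python
import spacy

# Load the installed Latin pipeline
nlp = spacy.load("la_core_web_lg")
doc = nlp("Gallia est omnis divisa in partes tres.")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.morph)
```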
### Label Scheme
<details>
<summary>View label scheme (688 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADP_particle`, `DET`, `INTJ`, `NOUN`, `X`, `_`, `adjective`, `adjective_adjective`, `adjective_particle`, `adverb`, `adverb_adverb`, `adverb_particle`, `conjunction`, `conjunction_conjunction`, `conjunction_particle`, `conjuntion`, `interjection`, `noun`, `noun_particle`, `number`, `particle`, `preposition`, `pronoun`, `pronoun_particle`, `pronoun_pronoun`, `proper_noun`, `proper_noun_particle`, `punc`, `punctuation`, `unknown`, `verb`, `verb_particle` |
| **`morphologizer`** | `POS=ADV`, `POS=CCONJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=AUX`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Pass`, `POS=SCONJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PART`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=VERB\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Abl\|POS=PRON`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `_`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON`, `POS=VERB\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Pass`, `POS=VERB`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Loc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, 
`Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET`, `POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Dat\|POS=PRON`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Voice=Pass`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Nom\|Number=Plur\|POS=PRON`, `Case=Abl\|Number=Plur\|POS=PRON`, `POS=DET`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Number=Sing\|POS=DET`, `Case=Loc\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Loc\|Number=Sing\|POS=NOUN`, `Case=Abl\|Number=Plur\|POS=VERB\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Acc\|Number=Plur\|POS=PRON`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Act`, `POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Number=Sing\|POS=PRON`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Dat\|Number=Sing\|POS=PRON`, `Case=Acc\|Number=Sing\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=INTJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Gen\|POS=PRON`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Pass`, `Gender=Masc\|POS=DET`, `Case=Abl\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Pass`, `Number=Sing\|POS=PRON`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, 
`Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Voice=Pass`, `POS=X`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|Voice=Act`, `Gender=Neut\|POS=PRON`, `Gender=Fem\|POS=PRON`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Abl\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Loc\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|Voice=Act`, `POS=PRON`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Masc\|POS=DET`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=VERB\|Tense=Pres\|Voice=Pass`, `POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Gen\|POS=VERB`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres`, `POS=VERB\|Tense=Past\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|POS=VERB`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|Voice=Act`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Acc\|Gender=Masc\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Voice=Act`, `Case=Nom\|Gender=Masc\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|POS=VERB\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Acc\|Number=Sing\|POS=VERB`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=VERB`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB`, `Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Mood=Sub\|POS=VERB`, 
`Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Voc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=VERB`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Dat\|Number=Sing\|POS=DET`, `Case=Nom\|POS=VERB`, `Case=Abl\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB`, `POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Plur\|POS=VERB\|Tense=Past\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB`, `Case=Dat\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Gen\|Number=Plur\|POS=PRON`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Dat\|POS=VERB`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, 
`Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|Voice=Pass`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=VERB\|Tense=Pres\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|Voice=Pass`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=VERB`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Acc\|Gender=Fem\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=CCONJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=CCONJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PART`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=CCONJ`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=VERB\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=CCONJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=CCONJ`, `Gender=Neut\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=CCONJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=CCONJ`, `Case=Nom\|Number=Plur\|POS=VERB\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=VERB\|Voice=Pass`, `Case=Voc\|Number=Sing\|POS=VERB\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Voice=Act`, 
`Case=Acc\|Gender=Neut\|Number=Plur\|POS=CCONJ`, `Case=Abl\|Gender=Neut\|POS=VERB\|Voice=Pass`, `Number=Plur\|POS=VERB\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Neut\|POS=VERB\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=VERB\|Voice=Pass` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:abs`, `advcl:cmp`, `advcl:pred`, `advcl:relcl`, `advmod`, `advmod:emph`, `advmod:lmod`, `advmod:neg`, `advmod:tmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `ccomp:relcl`, `ccomp:reported`, `compound`, `conj`, `conj:expl`, `cop`, `csubj`, `csubj:pass`, `csubj:relcl`, `dep`, `det`, `discourse`, `dislocated`, `dislocated:obj`, `expl:pass`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `obl:arg`, `orphan`, `parataxis`, `punct`, `reparandum`, `vocative`, `xcomp` |
| **`ner`** | `LOC`, `NORP`, `PERSON` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 90.81 |
| `ENTS_P` | 87.61 |
| `ENTS_R` | 94.24 |
| `NER_LOSS` | 3678.58 |
| `NER_TOK2VEC_LOSS` | 367.80 |
| `SENTS_F` | 93.44 |
| `SENTS_P` | 93.06 |
| `SENTS_R` | 93.82 |
| `TAG_ACC` | 94.10 |
| `POS_ACC` | 97.41 |
| `MORPH_ACC` | 92.76 |
| `LEMMA_ACC` | 94.66 |
| `DEP_UAS` | 83.19 |
| `DEP_LAS` | 77.56 |
| `TOK2VEC_LOSS` | 10752354.70 |
| `TAGGER_LOSS` | 762931.67 |
| `MORPHOLOGIZER_LOSS` | 1670397.35 |
| `TRAINABLE_LEMMATIZER_LOSS` | 601922.07 |
| `PARSER_LOSS` | 6324279.79 |
|
BSC-LT/roberta-large-bne-capitel-pos
|
[
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| null |
## Setup Notes
For this model, a VM with 2 T4 GPUs was used.
To get the training to work on the 2 GPUs (utilizing both GPUs simultaneously), the following command was used to initiate training:
```
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'b-mc2/sql-create-context' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 16
```
Note 1. The micro batch size was increased from the default of 4 to 16. Other training runs suggest it could be raised further; this was a first attempt.
Note 2. The output directory was initially `lora-alpaca`; its contents were moved to a new folder when the git repository was initialized.
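For reference, here is the batch-size arithmetic implied by these settings (a back-of-envelope sketch, assuming the standard alpaca-lora behavior of deriving gradient-accumulation steps from the effective batch size; the log below reports `batch_size: 128`):

```python
# Hypothetical sketch of the effective-batch arithmetic; not part of finetune.py.
world_size = 2          # GPUs in use (WORLD_SIZE above)
batch_size = 128        # effective batch size reported in the training log
micro_batch_size = 16   # per-device batch size passed on the command line

grad_accum_steps = batch_size // (micro_batch_size * world_size)
print(grad_accum_steps)  # 4 accumulation steps per optimizer update per device
```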
## Log
```
(sqltest) chrisdono@deep-learning-duo-t4-3:~/alpaca-lora$ WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'b-mc2/sql-create-context' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 16
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
Training Alpaca-LoRA model with params:
base_model: decapoda-research/llama-7b-hf
data_path: b-mc2/sql-create-context
output_dir: ./lora-alpaca
batch_size: 128
micro_batch_size: 16
num_epochs: 1
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [01:24<00:00, 2.57s/it]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [01:24<00:00, 2.57s/it]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Found cached dataset json (/home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
0%| | 0/1 [00:00<?, ?it/s]
Found cached dataset json (/home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 9.30it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 7.83it/s]
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
Loading cached split indices for dataset at /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-5a5ac0bd39fc20e0.arrow and /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-782fec259d4b8f6a.arrow
Loading cached split indices for dataset at /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-5a5ac0bd39fc20e0.arrow and /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-782fec259d4b8f6a.arrow
{'loss': 2.7003, 'learning_rate': 2.9999999999999997e-05, 'epoch': 0.02}
{'loss': 2.566, 'learning_rate': 5.9999999999999995e-05, 'epoch': 0.03}
{'loss': 2.2648, 'learning_rate': 8.999999999999999e-05, 'epoch': 0.05}
{'loss': 1.657, 'learning_rate': 0.00011099999999999999, 'epoch': 0.07}
{'loss': 1.1599, 'learning_rate': 0.00014099999999999998, 'epoch': 0.08}
{'loss': 0.9037, 'learning_rate': 0.00017099999999999998, 'epoch': 0.1}
{'loss': 0.8137, 'learning_rate': 0.000201, 'epoch': 0.12}
{'loss': 0.7827, 'learning_rate': 0.00023099999999999998, 'epoch': 0.13}
{'loss': 0.7554, 'learning_rate': 0.000261, 'epoch': 0.15}
{'loss': 0.7357, 'learning_rate': 0.00029099999999999997, 'epoch': 0.17}
{'loss': 0.6893, 'learning_rate': 0.0002957831325301205, 'epoch': 0.18}
{'loss': 0.6606, 'learning_rate': 0.00028975903614457827, 'epoch': 0.2}
{'loss': 0.6506, 'learning_rate': 0.0002837349397590361, 'epoch': 0.22}
{'loss': 0.6462, 'learning_rate': 0.00027771084337349395, 'epoch': 0.23}
{'loss': 0.6315, 'learning_rate': 0.0002716867469879518, 'epoch': 0.25}
{'loss': 0.6337, 'learning_rate': 0.0002656626506024096, 'epoch': 0.27}
{'loss': 0.6223, 'learning_rate': 0.00025963855421686746, 'epoch': 0.28}
{'loss': 0.6136, 'learning_rate': 0.00025361445783132525, 'epoch': 0.3}
{'loss': 0.6198, 'learning_rate': 0.00024759036144578314, 'epoch': 0.32}
{'loss': 0.6084, 'learning_rate': 0.00024156626506024095, 'epoch': 0.33}
{'eval_loss': 0.608456552028656, 'eval_runtime': 123.856, 'eval_samples_per_second': 16.148, 'eval_steps_per_second': 1.009, 'epoch': 0.33}
{'loss': 0.6021, 'learning_rate': 0.00023554216867469876, 'epoch': 0.35}
{'loss': 0.5949, 'learning_rate': 0.0002295180722891566, 'epoch': 0.37}
{'loss': 0.5972, 'learning_rate': 0.00022349397590361444, 'epoch': 0.38}
{'loss': 0.5922, 'learning_rate': 0.00021746987951807228, 'epoch': 0.4}
{'loss': 0.5876, 'learning_rate': 0.0002114457831325301, 'epoch': 0.42}
{'loss': 0.5788, 'learning_rate': 0.00020542168674698793, 'epoch': 0.43}
{'loss': 0.5894, 'learning_rate': 0.0001993975903614458, 'epoch': 0.45}
{'loss': 0.5877, 'learning_rate': 0.0001933734939759036, 'epoch': 0.47}
{'loss': 0.5835, 'learning_rate': 0.00018734939759036142, 'epoch': 0.48}
{'loss': 0.5791, 'learning_rate': 0.00018132530120481925, 'epoch': 0.5}
{'loss': 0.5841, 'learning_rate': 0.00017530120481927712, 'epoch': 0.52}
{'loss': 0.5728, 'learning_rate': 0.00016927710843373493, 'epoch': 0.53}
{'loss': 0.569, 'learning_rate': 0.00016325301204819274, 'epoch': 0.55}
{'loss': 0.5709, 'learning_rate': 0.00015722891566265058, 'epoch': 0.57}
{'loss': 0.5762, 'learning_rate': 0.00015120481927710845, 'epoch': 0.58}
{'loss': 0.5704, 'learning_rate': 0.00014518072289156626, 'epoch': 0.6}
{'loss': 0.5661, 'learning_rate': 0.0001391566265060241, 'epoch': 0.62}
{'loss': 0.5662, 'learning_rate': 0.00013313253012048193, 'epoch': 0.63}
{'loss': 0.5674, 'learning_rate': 0.00012710843373493975, 'epoch': 0.65}
{'loss': 0.5635, 'learning_rate': 0.00012108433734939758, 'epoch': 0.67}
{'eval_loss': 0.568750262260437, 'eval_runtime': 122.9061, 'eval_samples_per_second': 16.273, 'eval_steps_per_second': 1.017, 'epoch': 0.67}
{'loss': 0.5609, 'learning_rate': 0.00011506024096385541, 'epoch': 0.69}
{'loss': 0.5724, 'learning_rate': 0.00010903614457831325, 'epoch': 0.7}
{'loss': 0.5603, 'learning_rate': 0.00010301204819277107, 'epoch': 0.72}
{'loss': 0.5599, 'learning_rate': 9.698795180722891e-05, 'epoch': 0.74}
{'loss': 0.5655, 'learning_rate': 9.096385542168674e-05, 'epoch': 0.75}
{'loss': 0.5578, 'learning_rate': 8.493975903614457e-05, 'epoch': 0.77}
{'loss': 0.5577, 'learning_rate': 7.89156626506024e-05, 'epoch': 0.79}
{'loss': 0.5606, 'learning_rate': 7.289156626506024e-05, 'epoch': 0.8}
{'loss': 0.5496, 'learning_rate': 6.686746987951806e-05, 'epoch': 0.82}
{'loss': 0.5635, 'learning_rate': 6.08433734939759e-05, 'epoch': 0.84}
{'loss': 0.5522, 'learning_rate': 5.481927710843373e-05, 'epoch': 0.85}
{'loss': 0.5572, 'learning_rate': 4.879518072289156e-05, 'epoch': 0.87}
{'loss': 0.5454, 'learning_rate': 4.2771084337349395e-05, 'epoch': 0.89}
{'loss': 0.5485, 'learning_rate': 3.6746987951807227e-05, 'epoch': 0.9}
{'loss': 0.5592, 'learning_rate': 3.072289156626506e-05, 'epoch': 0.92}
{'loss': 0.5499, 'learning_rate': 2.469879518072289e-05, 'epoch': 0.94}
{'loss': 0.55, 'learning_rate': 1.867469879518072e-05, 'epoch': 0.95}
{'loss': 0.5511, 'learning_rate': 1.2650602409638553e-05, 'epoch': 0.97}
{'loss': 0.5531, 'learning_rate': 6.626506024096385e-06, 'epoch': 0.99}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 598/598 [4:45:30<00:00, 27.59s/it]
{'train_runtime': 17131.1027, 'train_samples_per_second': 4.47, 'train_steps_per_second': 0.035, 'train_loss': 0.7246327424129116, 'epoch': 1.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 598/598 [4:45:30<00:00, 28.65s/it]
```
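The LoRA hyperparameters printed at the top of the log correspond to a `peft` configuration along these lines (a sketch under the assumption of standard alpaca-lora wiring, not code copied from `finetune.py`):

```python
from peft import LoraConfig, get_peft_model

# Mirrors lora_r / lora_alpha / lora_dropout / lora_target_modules from the log.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
# model = get_peft_model(base_model, lora_config)
# Yields roughly 4.2M trainable of 6.7B total params, the ~0.0622% shown above.
```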
|
Backedman/DialoGPT-small-Anika
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| null |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: EinsZwo/en-to-de_longcontext
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# EinsZwo/en-to-de_longcontext
This model is a fine-tuned version of [EinsZwo/en-to-de_foursentcontext](https://huggingface.co/EinsZwo/en-to-de_foursentcontext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2164
- Validation Loss: 1.3578
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2241, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
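The serialized optimizer above can be reconstructed roughly as follows (a sketch; exact Keras deserialization defaults may differ slightly):

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay from 5e-05 to 0 over 2241 steps, as in the config above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=2241,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.01,
)
```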
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3692 | 1.3708 | 0 |
| 1.2695 | 1.3627 | 1 |
| 1.2164 | 1.3578 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Barleysack/AERoberta
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
---
tags:
- generated_from_keras_callback
model-index:
- name: jovianjaison/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jovianjaison/mt5-small-finetuned-amazon-en-es
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5453
- Validation Loss: 3.6420
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 6160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5453 | 3.6420 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Barleysack/klue-roberta-LSTM
|
[
"pytorch",
"roberta",
"transformers"
] | null |
{
"architectures": [
"QAWithLSTMModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: worsty/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Baybars/wav2vec2-xls-r-1b-turkish
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| null |
---
tags:
- generated_from_trainer
model-index:
- name: utku-model-part1_latest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# utku-model-part1_latest
This model is a fine-tuned version of bert-base-uncased on the glue dataset. It achieves the following results on the evaluation set:
- Validation Loss: 0.988973
- Matthews Correlation: 0.568543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.80887270242613e-06
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Epoch | Training Loss | Validation Loss | Mcc |
|-------|---------------|-----------------|-----------|
| 1 | 0.272100 | 0.702248 | 0.516378 |
| 2 | 0.451200 | 0.745428 | 0.580507 |
| 3 | 0.244100 | 0.859209 | 0.580807 |
| 4 | 0.160700 | 0.988973 | 0.568543 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bee-Garbs/DialoGPT-cartman-small
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.20 +/- 15.99
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Beelow/wav2vec2-ukrainian-model-large
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3129
- Accuracy: 0.8622
- F1: 0.8472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BenWitter/DialoGPT-small-Tyrion
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
license: apache-2.0
---
# PandaLM: Reproducible and Automated Language Model Assessment
Our GitHub repo: https://github.com/WeOpenML/PandaLM
**Please use `AutoTokenizer.from_pretrained('WeOpenML/PandaLM-7B-v1', use_fast=False)` if you encounter issues.**
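A minimal loading sketch (the causal-LM model class is an assumption based on the LLaMA-family lineage, not confirmed by this card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# use_fast=False avoids the tokenizer issues mentioned above.
tokenizer = AutoTokenizer.from_pretrained('WeOpenML/PandaLM-7B-v1', use_fast=False)
model = AutoModelForCausalLM.from_pretrained('WeOpenML/PandaLM-7B-v1')  # assumed class
```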
|
Berzemu/Coco
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
language:
- en
tags:
- causal-lm
---
# GPT2(023) Model Card
This is the smallest GPT-2 model (124M) from OpenAI, finetuned on approximately 2.23B tokens (almost the 2.48B needed to 'chinchilla-optimally' pretrain it, and more tokens than Cerebras-GPT-111M was trained on in total): 1.3B from Common Crawl sites from 2023, 540M from ArXiv, and 390M from GitHub.
The model was trained with a learning rate of 1e-4, with a warmup of 1024 steps, then decaying to 0. There were 4400 total steps during training at a batch size of 512 examples with a context length of 1024. The batch size and context length are the same as the pre-training of GPT2 itself. Training took a total of 1.18e+18 FLOs over the course of 79.32 hours locally with a 12gb RTX3060. Final train loss was 2.73.
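As a quick back-of-envelope check on the token count (added for illustration, not from the original card):

```python
steps = 4400            # total training steps
batch_size = 512        # examples per step
context_length = 1024   # tokens per example

tokens = steps * batch_size * context_length
print(f"{tokens / 1e9:.2f}B tokens")  # ~2.31B, consistent with the ~2.23B cited
# (the small gap is plausible if some sequences were shorter than 1024 tokens)
```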
### Evaluation of GPT2023
*(in progress)*
| model | piqa acc | winogrande acc | lambada ppl | lambada acc | arc acc | sciq acc | wsc acc |
| --- | --- | --- | --- | --- | --- | --- | --- |
| pythia-70m | 59.85 | 51.22 | 140.81 | 21.40 | 17.15 | 65.00 | 36.53 |
| pythia-160m | 62.68 | 51.07 | 30.03 | 36.76 | 19.62 | 76.20 | 36.58 |
| pythia-410m | 66.54 | 52.24 | 11.75 | 49.93 | 21.67 | 80.80 | 60.58 |
| opt-125m | 63.00 | 50.27 | 26.02 | 37.90 | 18.94 | 75.1 | 36.54 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| gpt2 (124m) | **62.89** | **51.61** | 40.06 | 32.56 | **19.03** | 75 | **43.27** |
| gpt2023 (124m) | 62.02 | 49.64 | **34.55** | **33.98** | 18.94 | **76.1** | 36.54 |
The resulting model achieves a perplexity of 339.38, making it competitive with Cerebras-590M with only 21% of the parameters, and much better than the original GPT-2, which scores 491.57!
(metric explanation here: https://twitter.com/aicrumb/status/1650350363898265601 , tldr it's a joke)
To demonstrate how GPT2(023) is aware of recent events, let’s take a look at a given example:
```
# About Covid-19
- -
The Covid-19
```
The model completes the text as:
```
# About Covid-19
- -
The Covid-19 pandemic is the worldwide pandemic that has left thousands of people unable to enter and work in or continue their normal daily normal life. In this brief post, we examine three of the main factors that have accelerated the pandemic and predict the path the pandemic will take through the rest of the world.
```
As you can see, GPT2(023) can generate coherent and relevant text pertaining to the Covid-19 pandemic, showcasing its ability to understand recent events. However, it struggles with certain subjects that weren't strongly represented in its training data. As only 2.23 billion tokens were used during finetuning, the model may have missed out on many recent events. One such event is the latest US election.
Given text in a question and answer format:
```
Q: Who is the last president?
A: Donald Trump
Q: Who is the most recent president?
A:
```
The model completes the text with: `Barack Obama`
### Model description
*(from GPT-2 model card)*
GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
This is the smallest version of GPT-2, with 124M parameters.
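To make the shifted-target objective concrete, here is a minimal illustration (a sketch added for clarity, not part of the original card):

```python
tokens = ["The", "cat", "sat", "on", "the", "mat"]

inputs = tokens[:-1]  # ["The", "cat", "sat", "on", "the"]
labels = tokens[1:]   # ["cat", "sat", "on", "the", "mat"]
# The causal mask lets position i attend only to inputs[0..i],
# so the model is trained to predict labels[i], the next token.
```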
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='crumb/gpt2023')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model

# Load the finetuned checkpoint from the Hub
tokenizer = GPT2Tokenizer.from_pretrained('crumb/gpt2023')
model = GPT2Model.from_pretrained('crumb/gpt2023')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # tokenize to PyTorch tensors
output = model(**encoded_input)  # output.last_hidden_state holds the features
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
|
BhanuSama/gpt2-finetuned-xsum
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# <u>Cliv v1 (Beta)</u>
# Visit our page at [cliv.art](https://cliv.art/)
## Trained by [theastro](https://huggingface.co/theastro/)
This model was trained on top of Stable Diffusion v1.5.
# Some arts:
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00054-3468404438.png" alt="cliv-v1" width="312px" height="auto">
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00050-3574198102.png" alt="cliv-v1" width="312px" height="auto">
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00055-2502901918.png" alt="cliv-v1" width="312px" height="auto">
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00085-714245376.png" alt="cliv-v1" width="312px" height="auto">
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20230430-001-baseline-mbert-qa-ft-clickbait-spoiling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230430-001-baseline-mbert-qa-ft-clickbait-spoiling
This model is a fine-tuned version of [intanm/mBERT-squad](https://huggingface.co/intanm/mBERT-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 2.8250 |
| No log | 2.0 | 400 | 2.7890 |
| 2.5758 | 3.0 | 600 | 3.0925 |
| 2.5758 | 4.0 | 800 | 3.5365 |
| 1.1707 | 5.0 | 1000 | 3.9249 |
| 1.1707 | 6.0 | 1200 | 4.2093 |
| 1.1707 | 7.0 | 1400 | 4.4076 |
| 0.4982 | 8.0 | 1600 | 4.7308 |
| 0.4982 | 9.0 | 1800 | 4.8659 |
| 0.2732 | 10.0 | 2000 | 4.9090 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9364764267990074
- name: Recall
type: recall
value: 0.9527095254123191
- name: F1
type: f1
value: 0.9445232335029615
- name: Accuracy
type: accuracy
value: 0.9865926885265203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0842
- Precision: 0.9365
- Recall: 0.9527
- F1: 0.9445
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0107 | 1.0 | 1756 | 0.1031 | 0.9260 | 0.9441 | 0.9350 | 0.9836 |
| 0.0139 | 2.0 | 3512 | 0.0813 | 0.9315 | 0.9515 | 0.9414 | 0.9857 |
| 0.0039 | 3.0 | 5268 | 0.0842 | 0.9365 | 0.9527 | 0.9445 | 0.9866 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigBoy/model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
datasets:
- imagenet-1k
- animelover/genshin-impact-images
- animelover/scenery-images
- FredZhang7/anime-prompts-180K
- fka/awesome-chatgpt-prompts
language:
- en
---
|
BigSalmon/Flowberta
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| 2023-04-30T03:15:03Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k-Intel_Images
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9486666666666667
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k-Intel_Images
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.1822
- Accuracy: 0.9487
- Weighted f1: 0.9485
- Micro f1: 0.9487
- Macro f1: 0.9497
- Weighted recall: 0.9487
- Micro recall: 0.9487
- Macro recall: 0.9500
- Weighted precision: 0.9485
- Micro precision: 0.9487
- Macro precision: 0.9496
## Model description
This is a multiclass image classification model of different scenery types.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Intel%20Image%20Classification/Intel_ViT.ipynb
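As a quick usage illustration, a minimal inference sketch (the hub id is a placeholder; the notebook linked above is the authoritative reference):
```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual repository name.
classifier = pipeline("image-classification", model="<user>/vit-base-patch16-224-in21k-Intel_Images")

# Returns the scenery classes ranked by score, e.g. [{'label': 'forest', 'score': ...}, ...]
print(classifier("scenery.jpg"))
```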
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/puneet6060/intel-image-classification
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.2305 | 1.0 | 878 | 0.2362 | 0.9153 | 0.9144 | 0.9153 | 0.9152 | 0.9153 | 0.9153 | 0.9148 | 0.9208 | 0.9153 | 0.9231 |
| 0.1136 | 2.0 | 1756 | 0.1785 | 0.9393 | 0.9391 | 0.9393 | 0.9405 | 0.9393 | 0.9393 | 0.9405 | 0.9391 | 0.9393 | 0.9407 |
| 0.0435 | 3.0 | 2634 | 0.1822 | 0.9487 | 0.9485 | 0.9487 | 0.9497 | 0.9487 | 0.9487 | 0.9500 | 0.9485 | 0.9487 | 0.9496 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigSalmon/GPTNeo350MInformalToFormalLincoln2
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| 2023-04-30T03:54:17Z
|
---
license: other
duplicated_from: TheBloke/vicuna-13B-1.1-HF
---
# Vicuna 13B 1.1 HF
This is an HF version of the [Vicuna 13B 1.1 model](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1).
It was created by merging the deltas provided in the above repo with the original Llama 13B model, [using the code provided on their Github page](https://github.com/lm-sys/FastChat#vicuna-weights).
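For reference, the delta-merge step looks roughly like this with FastChat (paths below are placeholders):
```
python3 -m fastchat.model.apply_delta \
    --base-model-path /path/to/llama-13b \
    --target-model-path /path/to/vicuna-13b-1.1 \
    --delta-path lmsys/vicuna-13b-delta-v1.1
```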
## My Vicuna 1.1 model repositories
I have the following Vicuna 1.1 repositories available:
**13B models:**
* [Unquantized 13B 1.1 model for GPU - HF format](https://huggingface.co/TheBloke/vicuna-13B-1.1-HF)
* [GPTQ quantized 4bit 13B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 13B 1.1 for CPU - GGML format for `llama.cpp`](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g-GGML)
**7B models:**
* [Unquantized 7B 1.1 model for GPU - HF format](https://huggingface.co/TheBloke/vicuna-7B-1.1-HF)
* [GPTQ quantized 4bit 7B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 7B 1.1 for CPU - GGML format for `llama.cpp`](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g-GGML)
# Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://vicuna.lmsys.org/
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
## Major updates of weights v1.1
- Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries.
- Fix the supervised fine-tuning loss computation for better model quality.
|
BigSalmon/GPTNeo350MInformalToFormalLincoln6
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| 2023-04-30T04:03:47Z
|
---
language:
- en
pipeline_tag: text-generation
tags:
- llama
library_name: adapter-transformers
---
# Vicuna 7B model download for llama.cpp
All credits go to lmsys for creating the model
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1
|
BigSalmon/InformalToFormalLincoln15
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93132
- name: F1
type: f1
value: 0.931310435665062
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2791
- Accuracy: 0.9313
- F1: 0.9313
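A minimal inference sketch (the hub id below is a placeholder, since this card does not state the final repository name):
```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual repository name.
classifier = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned-imdb")

# Label names depend on the model's id2label mapping.
print(classifier("A gripping plot and superb acting."))
```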
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3014 | 1.0 | 3125 | 0.2343 | 0.9198 | 0.9197 |
| 0.1645 | 2.0 | 6250 | 0.2791 | 0.9313 | 0.9313 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/InformalToFormalLincoln25
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| 2023-04-30T04:35:42Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: EinsZwo/en-to-ar_longcontext
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# EinsZwo/en-to-ar_longcontext
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8353
- Validation Loss: 1.9873
- Epoch: 2
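A minimal TensorFlow inference sketch (assuming this checkpoint loads the same way as its opus-mt-en-ar base):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("EinsZwo/en-to-ar_longcontext")
model = TFAutoModelForSeq2SeqLM.from_pretrained("EinsZwo/en-to-ar_longcontext")

inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```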
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 7440, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1904 | 2.0099 | 0 |
| 1.9466 | 1.9957 | 1 |
| 1.8353 | 1.9873 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/MrLincoln10
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-30T04:36:41Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # use `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper defined in the Deep RL course notebook:
# it downloads the pickled Q-table from the Hub and unpickles it.
model = load_from_hub(repo_id="Aditya5784/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/MrLincoln14
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.32 +/- 13.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are hypothetical, since the card does not state them):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo_id/filename -- replace with this model's actual ones.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
```
|
BigSalmon/MrLincoln5
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-eyes
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).<br>
# QGO-10b:
Source(s): [CivitAI](https://civitai.com/models/4188/qgo-10b)<br>
The biggest difference here has not been the merge, but just cutting down on the negative prompt. This can have amazing results for realism, though you may run into things that are a little too real. Be warned ;)
It uses RPGv4 instead of v3. It also involves Latex to get some of the bondage model back in as well.
There may also be traces of the Middle Finger and Gun2Head Pose LORAs in the examples.
Advisable to use Hires.fix with the following (or similar) settings:
- Upscaler: ESRGAN_4x (NMKD superscale can be a bit sharper, which is nice for smaller upscales)
- Upscale by: 1.1~2.0 (whatever suits your purpose)
- Denoising strength: 0.3 (or 0.5~0.7 if you don't mind changes from the base image)
You can of course generate low resolution versions first, and pick out the ones you enter into the upscale process (saves a lot of time). It really takes the amount of detail, especially of the face and eyes, to the next level.
This model aims for photorealism at higher resolutions, and large variation in poses, settings and genres. (And having a lot of fun filling half the prompt with wildcards).
I will still be tweaking it in the near future, to eliminate unwanted outputs and/or increase possibilities. Let me know if you run into anything that seems off.
Shout-out to the Unstable Diffusion Discord, where all the cool people share their gems in #photorealistic<br><br>




|
BigSalmon/SimplifyText
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.89 +/- 24.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are hypothetical, since the card does not state them):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo_id/filename -- replace with this model's actual ones.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
```
|
BigeS/DialoGPT-small-Rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| 2023-04-30T06:31:25Z
|
---
language:
- en
pipeline_tag: text-to-image
tags:
- not-for-all-audiences
---
|
Blabla/Pipipopo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language:
- mn
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-ner-demo
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1687
- Precision: 0.8684
- Recall: 0.8891
- F1: 0.8786
- Accuracy: 0.9693
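A minimal inference sketch (the hub id below is a placeholder, since this card does not state the final repository name):
```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual repository name.
ner = pipeline(
    "token-classification",
    model="<user>/distilbert-base-multilingual-cased-ner-demo",
    aggregation_strategy="simple",
)
print(ner("Улаанбаатар хот Монгол улсын нийслэл юм."))  # "Ulaanbaatar is the capital of Mongolia."
```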
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2009 | 1.0 | 572 | 0.1271 | 0.8074 | 0.8440 | 0.8253 | 0.9590 |
| 0.0951 | 2.0 | 1144 | 0.1069 | 0.8469 | 0.8768 | 0.8616 | 0.9671 |
| 0.063 | 3.0 | 1716 | 0.1136 | 0.8486 | 0.8783 | 0.8632 | 0.9680 |
| 0.0444 | 4.0 | 2288 | 0.1221 | 0.8506 | 0.8808 | 0.8654 | 0.9675 |
| 0.0303 | 5.0 | 2860 | 0.1389 | 0.8576 | 0.8823 | 0.8698 | 0.9677 |
| 0.0217 | 6.0 | 3432 | 0.1457 | 0.8683 | 0.8878 | 0.8779 | 0.9685 |
| 0.0157 | 7.0 | 4004 | 0.1542 | 0.8661 | 0.8873 | 0.8766 | 0.9692 |
| 0.0121 | 8.0 | 4576 | 0.1615 | 0.8730 | 0.8878 | 0.8803 | 0.9694 |
| 0.0094 | 9.0 | 5148 | 0.1675 | 0.8683 | 0.8883 | 0.8782 | 0.9688 |
| 0.0077 | 10.0 | 5720 | 0.1687 | 0.8684 | 0.8891 | 0.8786 | 0.9693 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Blaine-Mason/hackMIT-finetuned-sst2
|
[
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 36
| 2023-04-30T06:56:08Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.01 +/- 6.15
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Apocalypse-19/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Bloodwarrior/Chikfalay
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
BlueGamerBeast/DialoGPT-small-joshua
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: keras
tags:
- NLP
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BobBraico/bert-finetuned-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository contains LaMini-GPT-774M fine-tuned to Polish on the translated alpaca-dolly dataset.
The main task is to give accurate answers to the instructions asked.
Below you can find an example of how to run inference with the model.
**Update: Tokenizer added**
### Evaluation part
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME: str = 'Lajonbot/LaMini-GPT-774M-19000-steps-polish'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).cuda()

# Resize the embedding matrix to match the tokenizer size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)

def _generate_prompt(instruction, input=None):
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy konktekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""
    # Instruction-only branch (missing from the original card)
    return f"""Poniżej znajduje się instrukcja opisująca zadanie. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Response:"""

def evaluate(instruction, input=None, max_new_tokens=256):
    # Minimal sketch: the original card calls `evaluate` without defining it.
    # Build the prompt, generate, and print the decoded output.
    prompt = _generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."
print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
evaluate(instruction=manual_instruction,
         input=manual_input)
```
|
Botjallu/DialoGPT-small-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
## Model Details:
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Applied XORs & Quantization
This model has the XOR files pre-applied out of the box.
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
Quantization was done using https://github.com/oobabooga/GPTQ-for-LLaMa for use in KoboldAI
Via the following command:
```
python llama.py ./TehVenom_Pygmalion-7b-Merged-Safetensors c4 --wbits 4 --act-order --save_safetensors Pygmalion-7B-GPTQ-4bit.act-order.safetensors
```
This is the best eval I could get after trying many argument combinations, by converting the model from bf16 to fp32 before quantizing down to 4bit with `--act-order` as the sole argument.
- Wikitext 2: 6.2477378845215
- PTB-New: 46.5129699707031
- C4-New: 7.8470954895020
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
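Putting the format together in code, a minimal generation sketch (the hub id is a placeholder for wherever these merged weights are hosted; `device_map="auto"` requires `accelerate`):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical hub id -- point at the merged Pygmalion 7B weights.
tokenizer = AutoTokenizer.from_pretrained("<user>/pygmalion-7b-merged")
model = AutoModelForCausalLM.from_pretrained("<user>/pygmalion-7b-merged", device_map="auto")

prompt = (
    "Assistant's Persona: Assistant is a highly intelligent language model "
    "trained to comply with user requests.\n<START>\n"
    "Assistant: Hello! How may I help you today?\n"
    "You: What is Zork?\nAssistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens; generation stops at the </s> token.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```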
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
BumBelDumBel/ZORK_AI_FANTASY
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-12
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1103 | 1.0 | 1 | 0.7162 |
| 2.4967 | 2.0 | 2 | 0.5556 |
| 2.2947 | 3.0 | 3 | 4.9203 |
| 1.8616 | 4.0 | 4 | 1.3486 |
| 1.2871 | 5.0 | 5 | 0.0291 |
| 1.6287 | 6.0 | 6 | 0.0654 |
| 1.6195 | 7.0 | 7 | 0.0732 |
| 1.5294 | 8.0 | 8 | 1.2985 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42
| null |
---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-sinhala-abstaractive-summarization_CNN-dailymail-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-sinhala-abstaractive-summarization_CNN-dailymail-V2
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the CNN-DailyMail Sinhala dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4863
- Rouge1: 19.9769
- Rouge2: 8.04
- Rougel: 19.0307
- Rougelsum: 19.7651
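A minimal inference sketch (the hub id below is a placeholder, since this card does not state the final repository name):
```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual repository name.
summarizer = pipeline(
    "summarization",
    model="<user>/mT5_multilingual_XLSum-sinhala-abstaractive-summarization_CNN-dailymail-V2",
)

article = "..."  # a Sinhala news article goes here
print(summarizer(article, max_length=84, no_repeat_ngram_size=2)[0]["summary_text"])
```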
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 1.8746 | 1.0 | 750 | 1.8262 | 18.9753 | 7.9271 | 18.1349 | 18.7152 |
| 1.4727 | 2.0 | 1500 | 1.8094 | 19.2219 | 7.9749 | 18.4314 | 18.9405 |
| 1.2331 | 3.0 | 2250 | 1.8432 | 20.436 | 7.8378 | 19.584 | 20.1613 |
| 1.0381 | 4.0 | 3000 | 1.8987 | 20.2251 | 7.9593 | 19.1556 | 19.9829 |
| 0.8737 | 5.0 | 3750 | 1.9471 | 20.3262 | 7.8935 | 19.407 | 20.0628 |
| 0.7363 | 6.0 | 4500 | 2.0611 | 20.1551 | 7.5046 | 19.2213 | 19.963 |
| 0.6214 | 7.0 | 5250 | 2.1838 | 19.9045 | 7.6232 | 18.743 | 19.5983 |
| 0.5277 | 8.0 | 6000 | 2.3190 | 20.8581 | 8.1054 | 19.8079 | 20.5414 |
| 0.4576 | 9.0 | 6750 | 2.4091 | 20.028 | 7.7635 | 19.0721 | 19.7053 |
| 0.4099 | 10.0 | 7500 | 2.4863 | 19.9769 | 8.04 | 19.0307 | 19.7651 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 71
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16
| null |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Benjo27/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 229
| null |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
## Model Details:
This model has the XOR files pre-applied out of the box.
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/Metharme-7b
It has also been quantized down to 4Bit using the GPTQ library available here: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python llama.py .\Metharme-7b-Merged-Safetensors c4 --wbits 4 --act-order --save_safetensors Metharme-7B-GPTQ-4bit.act-order.safetensors
```
This is the best eval I could get after trying many argument combinations, by converting the model from bf16 to fp32 before quantizing down to 4bit with `--act-order` as the sole argument.
- Wikitext 2: 6.2369050979614
- PTB-New: 47.5177230834960
- C4-New: 7.9044938087463
---
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
Same process applies. Usually, it is best to do a sliding window over the user and model turns, but keep the system prompt fixed at the start of the context window.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21
| null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8404237430637297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- F1: 0.8404
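A minimal inference sketch; the repo id is a placeholder, since the full Hub path of this fine-tune is not given here:
```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this fine-tune
ner = pipeline("token-classification",
               model="<user>/xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")
print(ner("Jeff Dean est informaticien chez Google en Californie."))
```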
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.571 | 1.0 | 191 | 0.3288 | 0.7826 |
| 0.2554 | 2.0 | 382 | 0.2857 | 0.8261 |
| 0.1688 | 3.0 | 573 | 0.2716 | 0.8404 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="brinkman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
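`load_from_hub` is not defined in the snippet above; a minimal sketch, assuming the checkpoint is a pickled dict as saved by the deep-RL course notebooks:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled checkpoint from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```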
|
CAUKiel/JavaBERT
|
[
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 388
| null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="brinkman/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
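Continuing the snippet above, a greedy-rollout sketch; it assumes the checkpoint dict carries a `qtable` entry (as in the deep-RL course format) and the classic 4-tuple Gym step API:
```python
import gym
import numpy as np

env = gym.make(model["env_id"])
state = env.reset()
done = False
total_reward = 0.0
while not done:
    # Greedy action from the Q-table row for the current state
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```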
|
CLAck/en-km
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/distilbert_model_kaggle_200_epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/distilbert_model_kaggle_200_epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1017
- Train Accuracy: 0.3545
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.1017 | 0.3545 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
CLAck/vi-en
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
CLEE/CLEE
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Anita_3 Dreambooth model trained by SiobhanS with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
CLTL/MedRoBERTa.nl
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"transformers",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,988
| null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.93 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders; substitute this checkpoint's actual Hub repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# NOTE: hypothetical repo id and filename -- replace with this model's actual Hub artifacts
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
CLTL/icf-domains
|
[
"pytorch",
"roberta",
"nl",
"transformers",
"license:mit",
"text-classification"
] |
text-classification
|
{
"architectures": [
"RobertaForMultiLabelSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35
| null |
---
license: openrail
datasets:
- huolongguo10/insecure
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
# check_sec_tiny
Checks the security of web request parameters; supports many kinds of payloads (v0.1.2-tiny).
## Types
```
LABEL_0: secure
LABEL_1: insecure (may contain a payload)
```
## Usage
```python
import torch
from transformers import BertTokenizer, AutoModelForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('huolongguo10/check_sec')
model = AutoModelForSequenceClassification.from_pretrained('huolongguo10/check_sec', num_labels=2)

def check(text):
    # Tokenize the input and take the argmax over the two class logits
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    print(f'{predicted_class_id}:{text}')
    return 'secure' if predicted_class_id == 0 else 'insecure'
```
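For example (both inputs are illustrative, the first being a hypothetical SQL-injection-style payload):
```python
print(check("id=1' OR '1'='1"))  # expected to be flagged insecure
print(check("page=2"))           # expected to be flagged secure
```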
|
CLTL/icf-levels-att
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1426.01 +/- 139.91
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders; substitute this checkpoint's actual Hub repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# NOTE: hypothetical repo id and filename -- replace with this model's actual Hub artifacts
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
CLTL/icf-levels-mbw
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30
| null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.11 +/- 2.65
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mgarciav/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
CLTL/icf-levels-stm
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
> KoboldCPP was previously unable to stop inference when an EOS token was emitted, which caused the model to devolve into gibberish.
>
> This has been fixed on the dev branch of KoboldCPP, so make sure you're compiling the latest version; the fix landed only after this model was released.
>
> When running KoboldCPP, you will need to add the `--unbantokens` flag for this model to behave properly.
## Model Details
This model has the XOR files pre-applied out of the box.
Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/Metharme-7b
It has also been quantized down to 4-bit (Q4_1) using the GGML library available here: https://github.com/LostRuins/koboldcpp
---
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
The same process applies. Usually it is best to keep a sliding window over the user and model turns, while keeping the system prompt fixed at the start of the context window.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
CTBC/ATS
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-30T10:24:19Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CZWin32768/xlm-align
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-par3-075sim-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-par3-075sim-shuffled
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3693
- Rouge1: 53.5765
- Rouge2: 34.6724
- Rougel: 50.4148
- Rougelsum: 50.8647
- Gen Len: 16.6961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5506        | 1.0   | 5033  | 1.4021          | 53.3112 | 34.3583 | 50.0466 | 50.5797   | 16.7189 |
| 1.4764        | 2.0   | 10066 | 1.3751          | 53.522  | 34.5861 | 50.3606 | 50.8109   | 16.6965 |
| 1.4411        | 3.0   | 15099 | 1.3693          | 53.5765 | 34.6724 | 50.4148 | 50.8647   | 16.6961 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Calamarii/calamari
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: switch_high_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch_high_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CalvinHuang/mt5-small-finetuned-amazon-en-es
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
summarization
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16
| null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-Medical_Handwritten_Prescriptions_Information_Extraction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-Medical_Handwritten_Prescriptions_Information_Extraction
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cameron/BERT-SBIC-targetcategory
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30
| null |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository contains LaMini-Flan-T5-77M fine-tuned to Polish on a translated alpaca-dolly dataset.
Its main task is to produce accurate answers to the instructions it is given.
Below you can find instructions on how to run inference with this model.
This repository does not contain a tokenizer object at the moment (#TODO).
### Evaluation part
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME: str = 'Lajonbot/LaMini-Flan-T5-77M-Instruct-8000steps-polish'

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# LaMini-Flan-T5 is an encoder-decoder (T5) model, so it is loaded as a seq2seq LM
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).cuda()

# Resize model embeddings to the tokenizer size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)

def _generate_prompt(instruction, input=None):
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy konktekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""

# `evaluate` was referenced but not defined in the original snippet;
# this is a minimal sketch of what it is assumed to do.
def evaluate(instruction, input=None, max_new_tokens=256):
    prompt = _generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."

print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
print(evaluate(instruction=manual_instruction, input=manual_input))
```
|
Cameron/BERT-eec-emotion
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 36
| null |
---
language:
- en
widget:
- text: "The functional protein contains 1160 << amino acids >> with a large central [[ mucin domain ]], three consensus sites for glycosaminoglycan attachment, two epidermal growth factor-like repeats, a putative hyaluronan-binding motif, and a potential transmembrane domain near the C-terminal."
example_title: "PART-OF"
- text: "<< Theophylline >> exposure resulted in a sustained increase in mRNA expression for CysS and [[ PDE3A ]], but PDE4D gene expression was unchanged."
example_title: "REG-POS"
- text: "These results suggested that << DMBT >> could inhibit invasion and angiogenesis by downregulation of [[ VEGF ]]and MMP-9, resulting from the inhibition of Akt pathway."
example_title: "REG-NEG"
- text: "Colonic cyclooxygenase-2 and << interkeukin-1beta >> mRNA and spinal c-FOS mRNA expression were significantly down-regulated by ATB-429, but not by [[ mesalamine ]]."
example_title: "NOT"
---
# Model Card
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [ChemProt corpus: BioCreative VI](https://biocreative.bioinformatics.udel.edu/news/corpora/chemprot-corpus-biocreative-vi/) dataset.
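A minimal inference sketch, assuming the fine-tune is published as a text-classification checkpoint (the repo id below is a placeholder):
```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this fine-tune
clf = pipeline("text-classification", model="<user>/bert-base-uncased-chemprot")

# The entity pair is marked with << >> and [[ ]], as in the widget examples above
text = ("<< Theophylline >> exposure resulted in a sustained increase in mRNA "
        "expression for CysS and [[ PDE3A ]], but PDE4D gene expression was unchanged.")
print(clf(text))
```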
|
Canadiancaleb/jessebot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper_med_ar_mixx3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_medium_ar_mix
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2331
- eval_wer: 19.7586
- eval_runtime: 9116.6074
- eval_samples_per_second: 1.145
- eval_steps_per_second: 0.573
- epoch: 1.25
- step: 1500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Capreolus/bert-base-msmarco
|
[
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 238
| 2023-04-30T11:01:13Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: mbert-squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-squadv2
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9459 | 1.0 | 8145 | 0.9588 |
| 0.6985 | 2.0 | 16290 | 0.9959 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Capreolus/birch-bert-large-car_mb
|
[
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null |
{
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: switch_medium_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch_medium_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Capreolus/electra-base-msmarco
|
[
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
] |
text-classification
|
{
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 110
| null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -177.22 +/- 101.95
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'cfisicaro/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
Carolhuehuehuehue/Sla
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# konverner/due_eshop_21
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("konverner/due_eshop_21")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
CasualHomie/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- wiki_lingua
metrics:
- rouge
model-index:
- name: wiki_lingua-cs-8-3-5.6e-05-mt5-small-finetuned
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wiki_lingua
type: wiki_lingua
config: cs
split: test
args: cs
metrics:
- name: Rouge1
type: rouge
value: 14.7738
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_lingua-cs-8-3-5.6e-05-mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6706
- Rouge1: 14.7738
- Rouge2: 4.1406
- Rougel: 13.0515
- Rougelsum: 14.3388
## Baseline LEAD-64
- Rouge1: 21.28
- Rouge2: 4.55
- Rougel: 12.97
- Rougelsum: 12.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 5.8493 | 1.0 | 609 | 2.7949 | 12.8232 | 3.3348 | 11.0312 | 12.4584 |
| 3.701 | 2.0 | 1218 | 2.6966 | 14.6541 | 4.0724 | 12.9018 | 14.2196 |
| 3.546 | 3.0 | 1827 | 2.6706 | 14.7738 | 4.1406 | 13.0515 | 14.3388 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
dccuchile/albert-base-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| null |
---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
## Model Details
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years.
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
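A minimal transformers sketch of that format (the repo id is a placeholder, since this card describes converted weights whose exact Hub path is not given here):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id -- substitute the actual repository of the converted weights
repo = "<user>/pygmalion-7b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = (
    "Assistant's Persona: Assistant is a highly intelligent language model "
    "trained to comply with user requests.\n"
    "<START>\n"
    "Assistant: Hello! How may I help you today?\n"
    "You: What is Zork?\n"
    "Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens after the prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```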
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
dccuchile/albert-base-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# counterfeit-v3.0 API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and set **model_id** to "counterfeit-v30".
Coding in PHP, Node, Java, or another language? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/counterfeit-v30)
Credits: [View credits](https://civitai.com/?query=counterfeit-v3.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "counterfeit-v30",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
dccuchile/albert-large-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27
| null |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
## Model Details
Converted from the XOR weights released by PygmalionAI: https://huggingface.co/PygmalionAI/metharme-7b
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
The same process applies from there on. Usually, it is best to slide a window over the user and model turns while keeping the system prompt fixed at the start of the context window, as sketched below.
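A small helper sketch for that assembly (a hypothetical function, not part of any release):
```python
def build_prompt(system_prompt: str,
                 history: list[tuple[str, str]],
                 user_message: str,
                 max_turns: int = 8) -> str:
    """Keep the <|system|> prompt fixed, slide a window over the most
    recent (user, model) turns, then append the new user message and a
    trailing <|model|> token to request a response."""
    prompt = f"<|system|>{system_prompt}"
    for user_msg, model_msg in history[-max_turns:]:
        prompt += f"<|user|>{user_msg}<|model|>{model_msg}"
    return prompt + f"<|user|>{user_message}<|model|>"
```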
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
dccuchile/albert-large-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# MeinaPastel API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and set **model_id** to "meinapastel".
Coding in PHP, Node, Java, or another language? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/meinapastel)
Credits: [View credits](https://civitai.com/?query=MeinaPastel)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "meinapastel",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
dccuchile/albert-tiny-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.74 +/- 21.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; substitute the actual Hub location of this checkpoint):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- point these at the uploaded checkpoint.
checkpoint = load_from_hub(repo_id="your-namespace/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
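Once loaded, the policy can be sanity-checked against the reported score with Stable-Baselines3's evaluation helper:
```python
import gymnasium as gym  # use `import gym` with older SB3 releases
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```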
|
dccuchile/albert-xlarge-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26
| null |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- wiki_lingua
metrics:
- rouge
model-index:
- name: wiki_lingua-de-8-3-5.6e-05-mt5-small-finetuned
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wiki_lingua
type: wiki_lingua
config: de
split: test
args: de
metrics:
- name: Rouge1
type: rouge
value: 15.2299
language:
- de
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_lingua-de-8-3-5.6e-05-mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4218
- Rouge1: 15.2299
- Rouge2: 4.4912
- Rougel: 13.4991
- Rougelsum: 14.9193
## Baseline LEAD64
- Rouge1: 18.76
- Rouge2: 4.22
- Rougel: 12.14
- Rougelsum: 12.14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch mirroring them follows the list):
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
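As a sketch, the reported settings map onto 🤗 Transformers roughly as follows (all other arguments are left at their defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors only the hyperparameters reported above; everything else is a default.
args = Seq2SeqTrainingArguments(
    output_dir="wiki_lingua-de-8-3-5.6e-05-mt5-small-finetuned",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```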
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.5656 | 1.0 | 4939 | 2.5421 | 14.4738 | 4.064 | 12.7061 | 14.1813 |
| 2.9444 | 2.0 | 9878 | 2.4492 | 14.8349 | 4.3457 | 13.16 | 14.5623 |
| 2.8378 | 3.0 | 14817 | 2.4218 | 15.2299 | 4.4912 | 13.4991 | 14.9193 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
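If published to the Hub, the checkpoint could be used for German summarization roughly like this (the repo id is a placeholder):
```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub location of this checkpoint.
summarizer = pipeline("summarization", model="your-namespace/wiki_lingua-de-8-3-5.6e-05-mt5-small-finetuned")
text = "Hier steht ein längerer deutscher Artikel, der zusammengefasst werden soll ..."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```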
|
dccuchile/albert-xlarge-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24
| null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AnythingElse V4 API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and set **model_id** to "anythingelse-v4".
Coding in PHP, Node, Java, or another language? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/anythingelse-v4)
Credits: [View credits](https://civitai.com/?query=AnythingElse%20V4)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "anythingelse-v4",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
dccuchile/albert-xxlarge-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26
| null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# night-sky-yozora-style-model API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and set **model_id** to "night-sky-yozora-sty".
Coding in PHP, Node, Java, or another language? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/night-sky-yozora-sty)
Credits: [View credits](https://civitai.com/?query=night-sky-yozora-style-model)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "night-sky-yozora-sty",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
dccuchile/albert-large-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 75
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the pickle-loading helper from the Deep RL course notebooks;
# it downloads the file from the Hub and unpickles the model dictionary.
model = load_from_hub(repo_id="thuyentruong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
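A short greedy rollout sketch, assuming the pickled dictionary stores the Q-table under a `qtable` key (the Deep RL course convention) and a gym>=0.26-style step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("episode finished, last reward:", reward)
```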
|
dccuchile/albert-tiny-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 393
| 2023-04-30T12:42:42Z
|
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-rgres/modelout
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
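A minimal inference sketch with 🤗 Diffusers, assuming the weights are published under the repo id in the title and that you have a conditioning image of the (unspecified) type these weights were trained on:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Assumed repo id, taken from the card title.
controlnet = ControlNetModel.from_pretrained("rgres/modelout", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")  # float16 weights assume a CUDA device

# Must match the conditioning type the weights were trained on.
conditioning = load_image("conditioning.png")
image = pipe("a photo of a living room", image=conditioning, num_inference_steps=30).images[0]
image.save("out.png")
```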
|