| modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars) |
|---|---|---|---|---|---|---|
CleveGreen/JobClassifier_v2_gpt
|
[
"pytorch",
"gpt2",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_9_with_translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_9_with_translation
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3629
- Mse: 0.3629
- Mae: 0.4551
- R2: 0.1650
- Accuracy: 0.6333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| No log | 1.0 | 53 | 0.3421 | 0.3421 | 0.4573 | 0.2292 | 0.6167 |
| No log | 2.0 | 106 | 0.2617 | 0.2617 | 0.3888 | 0.4104 | 0.6667 |
| No log | 3.0 | 159 | 0.2117 | 0.2117 | 0.3422 | 0.5230 | 0.7667 |
| No log | 4.0 | 212 | 0.3250 | 0.3250 | 0.4990 | 0.2677 | 0.55 |
| No log | 5.0 | 265 | 0.2494 | 0.2494 | 0.3321 | 0.4380 | 0.7167 |
| No log | 6.0 | 318 | 0.2477 | 0.2477 | 0.3488 | 0.4419 | 0.75 |
| No log | 7.0 | 371 | 0.3209 | 0.3209 | 0.3599 | 0.2770 | 0.7833 |
| No log | 8.0 | 424 | 0.2704 | 0.2704 | 0.3715 | 0.3909 | 0.7 |
| No log | 9.0 | 477 | 0.2886 | 0.2886 | 0.3185 | 0.3498 | 0.7833 |
| 0.1507 | 10.0 | 530 | 0.2477 | 0.2477 | 0.3071 | 0.4418 | 0.7667 |
| 0.1507 | 11.0 | 583 | 0.2670 | 0.2670 | 0.3232 | 0.3984 | 0.7833 |
| 0.1507 | 12.0 | 636 | 0.2285 | 0.2285 | 0.2926 | 0.4851 | 0.75 |
| 0.1507 | 13.0 | 689 | 0.2378 | 0.2378 | 0.2980 | 0.4643 | 0.7833 |
| 0.1507 | 14.0 | 742 | 0.2544 | 0.2544 | 0.3194 | 0.4269 | 0.7667 |
| 0.1507 | 15.0 | 795 | 0.2571 | 0.2571 | 0.2904 | 0.4208 | 0.8 |
| 0.1507 | 16.0 | 848 | 0.2505 | 0.2505 | 0.2884 | 0.4357 | 0.8 |
| 0.1507 | 17.0 | 901 | 0.2654 | 0.2654 | 0.2846 | 0.4022 | 0.8 |
| 0.1507 | 18.0 | 954 | 0.2606 | 0.2606 | 0.2785 | 0.4128 | 0.8 |
| 0.0203 | 19.0 | 1007 | 0.2519 | 0.2519 | 0.2816 | 0.4324 | 0.8 |
| 0.0203 | 20.0 | 1060 | 0.2634 | 0.2634 | 0.2826 | 0.4065 | 0.8 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CodeDanCode/CartmenBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| 2023-04-02T06:31:53Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
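Embeddings produced either way are typically compared with cosine similarity for semantic search or clustering. A minimal pure-Python sketch of that comparison (the function name is illustrative, not part of either library's API):

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = (u · v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))
```

In practice you would apply this (or `torch.nn.functional.cosine_similarity`) to the `sentence_embeddings` rows computed above.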
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3814 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11442,
"weight_decay": 0.01
}
```
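The `WarmupLinear` scheduler named above ramps the learning rate linearly from 0 to the peak over the warmup steps, then decays it linearly back to 0. A sketch of that schedule using this card's values (11,442 warmup steps; 3,814 batches × 30 epochs = 114,420 total steps) — illustrative, not the library's own code:

```python
def warmup_linear_lr(step, peak_lr=2e-05, warmup_steps=11442, total_steps=114420):
    # Linear warmup to peak_lr over warmup_steps, then linear decay to 0.
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1.0, total_steps - warmup_steps))
```

Note that 11,442 is exactly 10% of the total steps, the common default warmup fraction in sentence-transformers examples.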
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
CodeDanCode/SP-KyleBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15
| null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 549.00 +/- 155.75
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kambehmw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kambehmw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kambehmw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
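In this configuration, `exploration_fraction` and `exploration_final_eps` define DQN's linear epsilon-greedy schedule: epsilon decays from its starting value to 0.01 over the first 10% of the 10M timesteps, then stays constant. A sketch of that schedule (the start value of 1.0 is SB3's default `exploration_initial_eps`, not listed above):

```python
def epsilon_at(step, n_timesteps=10_000_000, exploration_fraction=0.1,
               eps_start=1.0, eps_final=0.01):
    # Linear decay over the first exploration_fraction * n_timesteps steps,
    # then constant at eps_final.
    decay_steps = exploration_fraction * n_timesteps
    if step >= decay_steps:
        return eps_final
    return eps_start + (eps_final - eps_start) * (step / decay_steps)
```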
|
Venkatakrishnan-Ramesh/Text_gen
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertbase-uncased-2-actual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertbase-uncased-2-actual
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390
- Accuracy: 0.7490
- F1: 0.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5205 | 1.0 | 20000 | 0.5390 | 0.7490 | 0.7431 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
CogComp/bart-faithful-summary-detector
|
[
"pytorch",
"jax",
"bart",
"text-classification",
"en",
"dataset:xsum",
"transformers",
"xsum",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"BartForSequenceClassification"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": 1,
"max_length": 128,
"min_length": 12,
"no_repeat_ngram_size": null,
"num_beams": 4,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 234
| 2023-04-02T07:16:18Z
|
---
license: other
---
# 聲明 Disclaimer
本資料夾中的模型不是我所製作,版權歸原作者所有(各模型版權詳見 http://www.civitai.com 所示)。我上傳至本資料夾僅爲方便在綫抽取資源,并非盈利。
The models in this folder are not made by me, and the copyright belongs to the original author (see http://www.civitai.com for details on the copyright of each model). I uploaded to this folder only for the convenience of extracting resources online, not for profit.
# 模型列表 List of Models
本資料夾中所有模型詳見下表。
All the models in this folder are detailed in the table below.
| 模型名稱 Model Name | Civitai 頁面鏈接 Civitai Page Link | Civitai 下載鏈接 Civitai Download Link |
|----------------------|--------------------|--------------------|
|koreanDollLikeness_v10.safetensors |https://civitai.com/models/19356/koreandolllikenessv10 |https://civitai.com/api/download/models/22968 |
|koreanDollLikeness_v15.safetensors |https://civitai.com/models/24372/koreandolllikenessv15 |https://civitai.com/api/download/models/29136 |
|koreanDollLikeness_v20.safetensors |https://civitai.com/models/26124/koreandolllikeness-v20 |https://civitai.com/api/download/models/31284 |
|japaneseDollLikeness_v10.safetensors |https://civitai.com/models/19044/japanese-doll-likeness |https://civitai.com/api/download/models/22597 |
|japaneseDollLikeness_v15.safetensors |https://civitai.com/models/28811/japanesedolllikeness-v15|https://civitai.com/api/download/models/34562 |
|taiwanDollLikeness_v10.safetensors |https://civitai.com/models/17497/taiwan-doll-likeness |https://civitai.com/api/download/models/20684 |
|hongkongdolllikeness_v15.safetensors |https://civitai.com/models/17998/hongkongdolllikeness |https://civitai.com/api/download/models/22073 |
|chilloutmixss_v10.safetensors |https://civitai.com/models/10850/chilloutmixss |https://civitai.com/api/download/models/12876 |
|chilloutmixss_v20.safetensors |https://civitai.com/models/12843/chilloutmixss20 |https://civitai.com/api/download/models/15132 |
|chilloutmixss_v30.safetensors |https://civitai.com/models/16274/chilloutmixss30 |https://civitai.com/api/download/models/19219 |
|cuteGirlMix4_v10.safetensors |https://civitai.com/models/14171/cutegirlmix4 |https://civitai.com/api/download/models/16677 |
|eastasianDollLikeness_v5.safetensors |https://civitai.com/models/19495/eastasiandolllikeness |https://civitai.com/api/download/models/32382 |
|mikuya_v15.safetensors |https://civitai.com/models/8729?modelVersionId=11101 |https://civitai.com/api/download/models/11101 |
|mikuya_v10.safetensors |https://civitai.com/models/8729?modelVersionId=10299 |https://civitai.com/api/download/models/10299 |
|BreastInClass_V141.safetensors |https://civitai.com/models/9025?modelVersionId=23250 |https://civitai.com/api/download/models/23250 |
|BreastInClass_V14.safetensors |https://civitai.com/models/9025?modelVersionId=21077 |https://civitai.com/api/download/models/21077 |
|BreastInClass_V13.safetensors |https://civitai.com/models/9025?modelVersionId=13300 |https://civitai.com/api/download/models/13300 |
|BreastInClass_V12.safetensors |https://civitai.com/models/9025?modelVersionId=12041 |https://civitai.com/api/download/models/12041 |
|BreastInClass_V11.safetensors |https://civitai.com/models/9025?modelVersionId=10689 |https://civitai.com/api/download/models/10689 |
|BreastInClass_V10.safetensors |https://civitai.com/models/9025?modelVersionId=10666 |https://civitai.com/api/download/models/10666 |
|
CogComp/roberta-temporal-predictor
|
[
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.00436",
"transformers",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| null |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- stable-diffusion
---
# Dash Stable Diffusion Mix
This is a custom merged model based on Stable Diffusion 1.5, with a focus on realism.
----
# Sample

**Prompt:**
```
polaroid photo by Wong Kar-Wai, cute punk [girl|woman], smiling, solo, messy [bob|disheveled] hair, blue eyes, red lipstick, outrun red jacket, eye bags, wind blowing hair, realistic, cinematic atmosphere, dramatic lighting, hard back light, rembrandt lighting, deep of field, bokeh, bloom, RAW color, high quality, best quality, masterpiece
Negative prompt: (worst quality, low quality:1.4), ugly, frame border, painting, asian, clown, empty background, choker, watermark, (short hair:0.1), nude, easynegative
```
**Input details:** Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2041577811, Size: 768x768, Model hash: 447be1b31a, Model: dash_sd_mix
|
CohleM/bert-nepali-tokenizer
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T07:18:18Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-thainew-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-thainew-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 3.8255 |
| No log | 2.0 | 36 | 3.0655 |
| No log | 3.0 | 54 | 3.1652 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CohleM/mbert-nepali-tokenizer
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ComCom/gpt2-large
|
[
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kpop-lisa-sks-10000 Dreambooth model trained by Thuong with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:









|
ComCom/gpt2-medium
|
[
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-02T07:30:45Z
|
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5634
---
# T5-base-finetuned-rte
<!-- Provide a quick summary of what the model is/does. -->
This model is T5-base fine-tuned on the GLUE RTE dataset. It achieves the following results on the validation set:
- Accuracy: 0.7690
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, a sentence is formed as **"rte sentence1: " + rte_sent1 + "sentence 2: " + rte_sent2** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
For each label, the target is chosen as **"entailment"** if the label is 0 and **"not_entailment"** otherwise, then tokenized to get its **input_ids** and **attention_mask**.
During training, positions in the target input_ids that hold the **pad** token are replaced with -100 so that no loss is computed for them. These input ids are then passed as labels, and the target's attention_mask
is passed as the decoder attention mask.
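The preprocessing above can be sketched in plain Python as follows (the helper names and the `pad_token_id` default are illustrative, not the card's actual code; the input string mirrors the card's exact concatenation):

```python
def format_rte_input(sent1, sent2):
    # Serialize the sentence pair with the T5 task-prefix convention used above.
    return "rte sentence1: " + sent1 + "sentence 2: " + sent2

def format_rte_target(label):
    # Label 0 -> "entailment", anything else -> "not_entailment".
    return "entailment" if label == 0 else "not_entailment"

def mask_pad_tokens(label_ids, pad_token_id=0):
    # Replace pad positions with -100 so the cross-entropy loss ignores them.
    return [tok if tok != pad_token_id else -100 for tok in label_ids]
```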
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0
### Training results
|Epoch | Training Loss | Validation Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.1099 | 0.7617 |
| 2 | 0.0573 | 0.7617 |
| 3 | 0.0276 | 0.7690 |
|
ComCom/gpt2
|
[
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- livedoor_news_corpus
model-index:
- name: t5-base-japanese-finetuned-livedoor_news_corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-japanese-finetuned-livedoor_news_corpus
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the livedoor_news_corpus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
cometrain/neurotitle-rugpt3-small
|
[
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20
| 2023-04-02T07:37:59Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Nulaurev Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Connorvr/BrightBot-small
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| 2023-04-02T07:40:18Z
|
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-en-squad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 22.1
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 49.52
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 24.03
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 90.14
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 62.96
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-en-squad-qg`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-en](https://huggingface.co/ckpts/mt5-small-trimmed-en) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-en](https://huggingface.co/ckpts/mt5-small-trimmed-en)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="vocabtrimmer/mt5-small-trimmed-en-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-en-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 28.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 22.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.03 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.52 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-en
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Connorvr/TeachingGen
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| 2023-04-02T07:40:56Z
|
---
tags:
- conversational
---
# Genshin Impact Paimon DialoGPT Model
|
Contrastive-Tension/BERT-Base-CT-STSb
|
[
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-02T07:45:57Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-large-finetuned-augument-visquad2-2-4-2023-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-augument-visquad2-2-4-2023-3
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Best F1: 76.3263
- Loss: 2.9101
- Exact: 41.0887
- F1: 58.6813
- Total: 3821
- Hasans Exact: 56.0498
- Hasans F1: 81.3876
- Hasans Total: 2653
- Noans Exact: 7.1062
- Noans F1: 7.1062
- Noans Total: 1168
- Best Exact: 60.3769
- Best Exact Thresh: 0.7798
- Best F1 Thresh: 0.9874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Best F1 | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Noans Exact | Noans F1 | Noans Total | Best Exact | Best Exact Thresh | Best F1 Thresh |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:-----------:|:--------:|:-----------:|:----------:|:-----------------:|:--------------:|
| 0.9242 | 1.0 | 2807 | 69.6410 | 1.0239 | 37.3201 | 55.1119 | 3821 | 53.7505 | 79.3752 | 2653 | 0.0 | 0.0 | 1168 | 55.0118 | 0.8222 | 0.8968 |
| 0.3756 | 2.0 | 5615 | 73.7526 | 1.0092 | 38.8642 | 55.8953 | 3821 | 55.9744 | 80.5035 | 2653 | 0.0 | 0.0 | 1168 | 59.4085 | 0.9128 | 0.9611 |
| 0.2595 | 3.0 | 8423 | 75.1395 | 1.0121 | 39.7278 | 56.5553 | 3821 | 57.1806 | 81.4165 | 2653 | 0.0856 | 0.0856 | 1168 | 60.6386 | 0.8138 | 0.9174 |
| 0.185 | 4.0 | 11231 | 75.2011 | 1.2309 | 39.2306 | 56.7010 | 3821 | 56.2005 | 81.3625 | 2653 | 0.6849 | 0.6849 | 1168 | 59.7749 | 0.7215 | 0.8729 |
| 0.1336 | 5.0 | 14038 | 75.0330 | 1.4052 | 38.4454 | 56.1488 | 3821 | 55.2582 | 80.7556 | 2653 | 0.2568 | 0.2568 | 1168 | 59.4085 | 0.6660 | 0.8646 |
| 0.0976 | 6.0 | 16846 | 75.4976 | 1.6109 | 38.5763 | 56.1952 | 3821 | 55.4467 | 80.8224 | 2653 | 0.2568 | 0.2568 | 1168 | 59.8534 | 0.6631 | 0.9605 |
| 0.072 | 7.0 | 19654 | 76.0690 | 1.9673 | 39.5970 | 56.9041 | 3821 | 56.0874 | 81.0142 | 2653 | 2.1404 | 2.1404 | 1168 | 60.5862 | 0.7197 | 0.9882 |
| 0.0526 | 8.0 | 22462 | 75.3652 | 2.2945 | 38.8903 | 56.5382 | 3821 | 55.3336 | 80.7511 | 2653 | 1.5411 | 1.5411 | 1168 | 59.8273 | 0.6659 | 0.9573 |
| 0.0389 | 9.0 | 25269 | 76.0674 | 2.6609 | 42.5281 | 59.8494 | 3821 | 56.0121 | 80.9591 | 2653 | 11.9007 | 11.9007 | 1168 | 60.4292 | 0.6494 | 0.9632 |
| 0.0291 | 10.0 | 28070 | 76.3263 | 2.9101 | 41.0887 | 58.6813 | 3821 | 56.0498 | 81.3876 | 2653 | 7.1062 | 7.1062 | 1168 | 60.3769 | 0.7798 | 0.9874 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Contrastive-Tension/BERT-Base-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16
| 2023-04-19T09:11:21Z
|
---
datasets:
- Akajackson/donut_synthdog_rus
language:
- ru
- en
---
## Model description
A Donut model (an end-to-end transformer) for recognizing Russian-language text.
https://github.com/clovaai/donut
For training, a SynthDoG dataset of 100k images was generated, with texts taken from works of Russian literature.
https://huggingface.co/datasets/Akajackson/donut_synthdog_rus
The model was trained on Kaggle using the notebook by NielsRogge, with the original tokenizer replaced by DeepPavlov/xlm-roberta-large-en-ru.
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Donut/CORD/Fine_tune_Donut_on_a_custom_dataset_(CORD)_with_PyTorch_Lightning.ipynb
Validation metric Normed ED: 0.02239.
## Model capabilities
This model serves as a base for the following tasks:
* recognition of various document types;
* question answering over documents;
* document classification.
To solve your own task, you can use the notebooks mentioned above.
The dataset needs to be annotated in the format specified in the Donut repository.
|
Contrastive-Tension/BERT-Base-Swe-CT-STSb
|
[
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 126
| null |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: juro95/xlm-roberta-finetuned-ner-recleaned_cased_0.5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juro95/xlm-roberta-finetuned-ner-recleaned_cased_0.5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0334
- Validation Loss: 0.0531
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35468, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2096 | 0.0941 | 0 |
| 0.0821 | 0.0652 | 1 |
| 0.0499 | 0.0554 | 2 |
| 0.0334 | 0.0531 | 3 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.6.5
- Datasets 2.3.2
- Tokenizers 0.13.2
|
Contrastive-Tension/BERT-Distil-NLI-CT
|
[
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| 2023-04-02T07:56:43Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-clang8-e1-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-clang8-e1-b16
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3150
- Rouge1: 82.2657
- Rouge2: 76.3303
- Rougel: 81.8622
- Rougelsum: 81.9329
- Gen Len: 16.6232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.227 | 0.34 | 50000 | 0.3619 | 79.9421 | 73.5267 | 79.424 | 79.5292 | 16.1650 |
| 0.1658 | 0.68 | 100000 | 0.3150 | 82.2657 | 76.3303 | 81.8622 | 81.9329 | 16.6232 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.11.0a0+b6df043
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Contrastive-Tension/BERT-Large-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-02T07:58:58Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-newVersion_Jhon_Wick
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-newVersion_Jhon_Wick
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4886
- Rouge1: 48.6605
- Rouge2: 24.9693
- Rougel: 37.3383
- Rougelsum: 45.588
- Gen Len: 78.5668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9661 | 1.0 | 765 | 1.6090 | 45.3876 | 22.2762 | 34.7559 | 42.3201 | 76.2048 |
| 1.7525 | 2.0 | 1530 | 1.5620 | 46.6776 | 23.2287 | 35.6355 | 43.5005 | 79.2035 |
| 1.7231 | 3.0 | 2295 | 1.5360 | 47.5061 | 23.9061 | 36.2823 | 44.3393 | 78.8096 |
| 1.6819 | 4.0 | 3060 | 1.5188 | 47.9422 | 24.3479 | 36.7844 | 44.8047 | 78.6368 |
| 1.6704 | 5.0 | 3825 | 1.5086 | 48.2693 | 24.6015 | 36.9681 | 45.1561 | 78.3357 |
| 1.6481 | 6.0 | 4590 | 1.5003 | 48.4714 | 24.7449 | 37.1888 | 45.3465 | 77.8874 |
| 1.6505 | 7.0 | 5355 | 1.4954 | 48.4435 | 24.8279 | 37.2272 | 45.3858 | 77.9686 |
| 1.6331 | 8.0 | 6120 | 1.4914 | 48.5349 | 24.9022 | 37.2725 | 45.4888 | 78.1754 |
| 1.6274 | 9.0 | 6885 | 1.4892 | 48.6537 | 24.9567 | 37.3426 | 45.5884 | 78.1263 |
| 1.6215 | 10.0 | 7650 | 1.4886 | 48.6605 | 24.9693 | 37.3383 | 45.588 | 78.5668 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
Contrastive-Tension/BERT-Large-NLI-CT
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15
| null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1388.84 +/- 217.63
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename below are placeholders/assumptions, not stated by this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Both repo_id and filename are illustrative assumptions.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Cooker/cicero-similis
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-random-stopvoicing-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-random-stopvoicing-1
This model is a fine-tuned version of [](https://huggingface.co/) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3669
- Accuracy: 0.8702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6871 | 0.99 | 20 | 0.6530 | 0.6213 |
| 0.6757 | 1.98 | 40 | 0.6507 | 0.6104 |
| 0.6193 | 2.96 | 60 | 0.4827 | 0.7691 |
| 0.5511 | 4.0 | 81 | 0.4494 | 0.7950 |
| 0.5076 | 4.99 | 101 | 0.4027 | 0.8283 |
| 0.4882 | 5.98 | 121 | 0.5145 | 0.7813 |
| 0.4728 | 6.96 | 141 | 0.4394 | 0.8120 |
| 0.4351 | 8.0 | 162 | 0.4163 | 0.8270 |
| 0.4432 | 8.99 | 182 | 0.3823 | 0.8392 |
| 0.4165 | 9.98 | 202 | 0.4307 | 0.8263 |
| 0.3947 | 10.96 | 222 | 0.3569 | 0.8604 |
| 0.4186 | 12.0 | 243 | 0.4431 | 0.8283 |
| 0.3948 | 12.99 | 263 | 0.3836 | 0.8522 |
| 0.3627 | 13.98 | 283 | 0.3778 | 0.8569 |
| 0.3922 | 14.96 | 303 | 0.3523 | 0.8624 |
| 0.3668 | 16.0 | 324 | 0.3543 | 0.8631 |
| 0.3676 | 16.99 | 344 | 0.3485 | 0.8610 |
| 0.3118 | 17.98 | 364 | 0.3838 | 0.8638 |
| 0.328 | 18.96 | 384 | 0.3509 | 0.8685 |
| 0.3387 | 20.0 | 405 | 0.3593 | 0.8685 |
| 0.3088 | 20.99 | 425 | 0.3596 | 0.8631 |
| 0.2942 | 21.98 | 445 | 0.3585 | 0.8713 |
| 0.3027 | 22.96 | 465 | 0.3644 | 0.8651 |
| 0.2913 | 23.7 | 480 | 0.3575 | 0.8692 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Coolhand/Abuela
|
[
"en",
"image_restoration",
"superresolution",
"license:mit"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST-2
type: glue
args: SST-2
metrics:
- name: Accuracy
type: accuracy
value: 0.9323
---
# T5-base-finetuned-sst2
<!-- Provide a quick summary of what the model is/does. -->
This model is T5 fine-tuned on the GLUE SST-2 dataset. It achieves the following results on the validation set:
- Accuracy: 0.9323
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, a sentence is formed as **"sst2 sentence: " + sst2_sent** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
For each label, the target is chosen as **"positive"** if the label is 1, else **"negative"**, and tokenized to get **input_ids** and **attention_mask**.
During training, the label input_ids equal to the **pad** token are replaced with -100 so that no loss is calculated for them. These input ids are then given as labels, and the attention_mask of the labels
is given as the decoder attention mask.
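The conversion above can be sketched in plain Python (the helper names are illustrative, not taken from the original training code; T5's pad token id is 0):

```python
def to_text_pair(sst2_sent: str, label: int):
    """Build the text-to-text input/target for one SST-2 example."""
    input_text = "sst2 sentence: " + sst2_sent
    target_text = "positive" if label == 1 else "negative"
    return input_text, target_text

def mask_pad_labels(label_ids, pad_token_id=0):
    """Replace pad token ids with -100 so the loss ignores those positions."""
    return [tid if tid != pad_token_id else -100 for tid in label_ids]

print(to_text_pair("a gripping drama", 1))
# -> ('sst2 sentence: a gripping drama', 'positive')
```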
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 2
### Training results
|Epoch | Training Loss | Validation Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.1045 | 0.9323 |
| 2 | 0.0539 | 0.9243 |
|
Corvus/DialoGPT-medium-CaptainPrice
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| 2023-04-02T08:15:25Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-BART-all-categories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-BART-all-categories
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
CouchCat/ma_ner_v7_distil
|
[
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| null |
|
Coyotl/DialoGPT-test2-arthurmorgan
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8567
---
# T5-base-finetuned-mnli
<!-- Provide a quick summary of what the model is/does. -->
This model is T5 fine-tuned on the GLUE MNLI dataset. It achieves the following results on the **validation-matched** set:
- Accuracy: 0.8567
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, a sentence is formed as **"mnli premise: " + mnli_premise + "hypothesis: " + mnli_hypothesis** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
For each label, the target is chosen as **"entailment"** if the label is 0, **"neutral"** if the label is 1, else **"contradiction"**, and tokenized to get **input_ids** and **attention_mask**.
During training, the label input_ids equal to the **pad** token are replaced with -100 so that no loss is calculated for them. These input ids are then given as labels, and the attention_mask of the labels
is given as the decoder attention mask.
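As a minimal sketch of this mapping (an illustrative helper, not the original training code; the input string is concatenated exactly as written above):

```python
LABEL_TO_TEXT = {0: "entailment", 1: "neutral", 2: "contradiction"}

def to_text_pair(mnli_premise: str, mnli_hypothesis: str, label: int):
    """Build the text-to-text input/target for one MNLI example."""
    # Concatenated exactly as described in the card above.
    input_text = "mnli premise: " + mnli_premise + "hypothesis: " + mnli_hypothesis
    return input_text, LABEL_TO_TEXT[label]
```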
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 2
### Training results
|Epoch | Training Loss | Validation Matched Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.1661 | 0.8404 |
| 2 | 0.1016 | 0.8567 |
|
CracklesCreeper/Piglin-Talks-Harry-Potter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| 2023-04-02T08:34:22Z
|
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9123
---
# T5-base-finetuned-qqp
<!-- Provide a quick summary of what the model is/does. -->
This model is T5 fine-tuned on the GLUE QQP dataset. It achieves the following results on the **validation** set
- Accuracy: 0.9123
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, a sentence has been formed as **"qqp question1: " + qqp_question1 + "question2: " + qqp_question2** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
Each target is chosen as **"duplicate"** if the label is 1 and **"not_duplicate"** otherwise, then tokenized to get its **input_ids** and **attention_mask**.
During training, the target input_ids corresponding to the **pad** token are replaced with -100 so that no loss is calculated for them. These input ids are then given as labels, and the targets' attention_mask is given as the decoder attention mask.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3
### Training results
|Epoch | Training Loss | Validation Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.0672 | 0.8888 |
| 2 | 0.0428 | 0.9082 |
| 3 | 0.0231 | 0.9123 |
|
Crasher222/kaggle-comp-test
|
[
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"transformers",
"autonlp",
"co2_eq_emissions"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29
| 2023-04-02T08:44:38Z
|
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8922
---
# T5-base-finetuned-mrpc
<!-- Provide a quick summary of what the model is/does. -->
This model is T5 fine-tuned on the GLUE MRPC dataset. It achieves the following results on the validation set
- Accuracy: 0.8922
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, a sentence has been formed as **"mrpc sentence1: " + mrpc_sentence1 + "sentence 2: " + mrpc_sentence2** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
Each target is chosen as **"equivalent"** if the label is 1 and **"not_equivalent"** otherwise, then tokenized to get its **input_ids** and **attention_mask**.
During training, the target input_ids corresponding to the **pad** token are replaced with -100 so that no loss is calculated for them. These input ids are then given as labels, and the targets' attention_mask is given as the decoder attention mask.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0
### Training results
|Epoch | Training Loss | Validation Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.1925 | 0.8799 |
| 2 | 0.0767 | 0.8922 |
| 3 | 0.0251 | 0.8922 |
|
CrayonShinchan/bart_fine_tune_test
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
**Train-Test Set:** "teknofest_train_final.csv"
**Model:** "dbmdz/bert-base-turkish-128k-uncased"
**Preprocessing**
- A special token (#) is inserted before uppercase characters, which are then lowercased
- Punctuation marks are removed
## Tokenizer Parameters
```
max_length=64
padding=True
truncation=True
```
## Training Parameters
- **Epoch:** 3
- **Learning Rate:** 7e-5
- **Batch-Size:** 64
- **Tokenizer Length:** 64
- **Loss:** BCE
- **Online Hard Example Mining:** Enabled
- **Class-Weighting:** Enabled (^0.3)
- **Early Stopping:** Disabled
- **Stratified Batch Sampling:** Enabled
- **Gradient Accumulation:** Disabled
- **LR Scheduler:** Cosine-with-Warmup
- **Warmup Ratio:** 0.1
- **Weight Decay:** 0.01
- **LLRD:** 0.95
- **Label Smoothing:** 0.05
- **Gradient Clipping:** 1.0
- **MLM Pre-Training:** Disabled
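The class weighting above (^0.3) suggests inverse-frequency weights tempered by a 0.3 exponent; a plausible sketch, using the support counts from the CV10 results as example frequencies (the exact weighting formula is an assumption):

```python
def class_weights(counts, power=0.3):
    """Inverse-frequency class weights, softened by raising to `power` (0.3 here).
    This exact formula is an assumption; only the ^0.3 tempering is documented."""
    total = sum(counts.values())
    return {c: (total / n) ** power for c, n in counts.items()}

# Support counts per class, as reported in the CV10 results
counts = {"INSULT": 2393, "OTHER": 3528, "PROFANITY": 2376,
          "RACIST": 2033, "SEXIST": 2081}
weights = class_weights(counts)
# Rarer classes (e.g. RACIST) receive slightly larger loss weights than
# frequent ones (e.g. OTHER), but far less aggressively than plain 1/freq.
print(weights)
```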
## CV10 Results
```
              precision    recall  f1-score   support
      INSULT     0.9172    0.9260    0.9216      2393
       OTHER     0.9681    0.9646    0.9663      3528
   PROFANITY     0.9627    0.9571    0.9599      2376
      RACIST     0.9684    0.9651    0.9667      2033
      SEXIST     0.9618    0.9668    0.9643      2081
    accuracy                         0.9562     12411
   macro avg     0.9557    0.9559    0.9558     12411
weighted avg     0.9563    0.9562    0.9562     12411
```
|
CrayonShinchan/fine_tune_try_1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T08:46:20Z
|
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [zhihan1996/DNA_bert_6](https://huggingface.co/zhihan1996/DNA_bert_6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Accuracy: 0.8492
- Precision: 0.8807
- Recall: 0.8283
- F1: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4859 | 0.41 | 500 | 0.4180 | 0.8001 | 0.8552 | 0.7506 | 0.7995 |
| 0.3937 | 0.81 | 1000 | 0.4044 | 0.8217 | 0.8871 | 0.7612 | 0.8193 |
| 0.3426 | 1.22 | 1500 | 0.3740 | 0.8340 | 0.8765 | 0.8 | 0.8365 |
| 0.3068 | 1.63 | 2000 | 0.3839 | 0.8398 | 0.8808 | 0.8077 | 0.8426 |
| 0.2757 | 2.04 | 2500 | 0.4260 | 0.8386 | 0.8181 | 0.8950 | 0.8548 |
| 0.2211 | 2.44 | 3000 | 0.3966 | 0.8492 | 0.8807 | 0.8283 | 0.8537 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
Crisblair/Wkwk
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T08:50:36Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 623.50 +/- 221.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vcncolin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vcncolin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vcncolin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
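With `exploration_fraction` 0.1 over 1e6 timesteps, DQN's epsilon-greedy exploration rate anneals linearly from 1.0 down to `exploration_final_eps` 0.01 during the first 100,000 steps, then holds. A plain-Python sketch of that schedule (the initial epsilon of 1.0 is SB3's default, assumed here):

```python
def epsilon(step, n_timesteps=1_000_000, exploration_fraction=0.1,
            final_eps=0.01, initial_eps=1.0):
    """Linearly anneal epsilon over the first fraction of training, then hold."""
    decay_steps = exploration_fraction * n_timesteps
    if step >= decay_steps:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * step / decay_steps

print(epsilon(0))        # fully random at the start
print(epsilon(50_000))   # halfway through the annealing window
print(epsilon(200_000))  # held at the final exploration rate
```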
|
Crispy/dialopt-small-kratos
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
language:
- ru
tags:
- PyTorch
- Transformers
---
# BERT base model for pair ranking (an RLHF reward model) in Russian.
Training uses the following [margin ranking loss](https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html).
The model is based on [ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base).
The following datasets were translated with the Google Translate API for reward training:
- [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Dahoas/synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
- [openai/webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
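The margin ranking objective linked above can be sketched in plain Python (simplified to one scalar score per response; this mirrors `torch.nn.MarginRankingLoss` with target y = 1, i.e. the chosen response should outscore the rejected one):

```python
def margin_ranking_loss(chosen_scores, rejected_scores, margin=0.0):
    """max(0, -(chosen - rejected) + margin), averaged over the batch.
    Penalizes pairs where the chosen score does not beat the rejected
    score by at least `margin`."""
    losses = [max(0.0, -(c - r) + margin)
              for c, r in zip(chosen_scores, rejected_scores)]
    return sum(losses) / len(losses)

# Only the second pair violates the margin and contributes to the loss
print(margin_ranking_loss([0.6, 0.8], [0.5, 0.9], margin=0.1))
```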
First, download the custom model locally. You can do it manually, or:
- git lfs install
- git clone https://huggingface.co/Andrilko/ruBert-base-reward
Alternatively, see [this manual](https://huggingface.co/docs/hub/models-downloading)
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute score:
```python
#Use custom model class:
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel  # only these two are used below
class RewardModel(nn.Module):
def __init__(self, model_name):
super(RewardModel, self).__init__()
self.checkpoint = model_name
self.bert = AutoModel.from_pretrained(model_name,
return_dict=False)
self.layer_norm = nn.LayerNorm(768)
self.dropout = nn.Dropout(0.3)
self.dense = nn.Sequential(
nn.Linear(768, 512),
nn.LeakyReLU(negative_slope=0.01),
nn.Dropout(0.3),
nn.Linear(512, 1),
nn.Sigmoid()
)
def forward(self, input_ids, token_type_ids, attention_mask):
model_output = self.bert(input_ids=input_ids,
token_type_ids = token_type_ids,
attention_mask=attention_mask)
last_hidden_states = model_output[0]
pooled_output = last_hidden_states[:,0]
pooled_output = self.layer_norm(pooled_output)
pooled_output = self.dropout(pooled_output)
preds = self.dense(pooled_output)
return preds
#Create model object and init pretrain weights:
reward_name = "ai-forever/ruBert-base"
tokenizer=AutoTokenizer.from_pretrained(reward_name)
model = RewardModel(reward_name)
model.load_state_dict(torch.load('./ruBert-base-reward/pytorch_model.bin'))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#Sentences that we want to score:
sentences = ['Человек: Что такое QR-код?', 'Ассистент: QR-код - это тип матричного штрих-кода.']
#Compute reward score:
with torch.no_grad():
model.to(device)
encoded_input = tokenizer(sentences[0],sentences[1],
truncation=True,
add_special_tokens=True,
max_length=512,
padding='max_length',
return_tensors='pt')
encoded_input = encoded_input.to(device)
score = model(**encoded_input).cpu().flatten().numpy()
print(score)
```
# Authors
+ Aleksandr Abramov: [Github](https://github.com/Ab1992ao), [Kaggle Competitions Master](https://www.kaggle.com/andrilko);
|
DSI/personal_sentiment
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| 2023-04-02T10:51:21Z
|
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: gpt2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.54930
---
# gpt2-finetuned-wnli
<!-- Provide a quick summary of what the model is/does. -->
This model is GPT-2 fine-tuned on the GLUE WNLI dataset. It achieves the following results on the validation set
- Accuracy: 0.54930
## Model Details
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
However, it achieves very good results on text classification tasks.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 16
- eval_batch_size: 16
- seed: 123
- optimizer: epsilon=1e-08
- num_epochs: 3
### Training results
|Epoch | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy |
|:----:|:-------------:|:-----------------:|:---------------:|:-------------------:|
| 1 | 0.72133 | 0.49449 | 0.67626 | 0.50704 |
| 2 | 0.71982 | 0.50866 | 0.70278 | 0.49296 |
| 3 | 0.70411 | 0.51181 | 0.68919 | **0.54930** |
|
alexandrainst/da-hatespeech-detection-base
|
[
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,719
| null |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- billster45/autotrain-data-imdb-sentiment
co2_eq_emissions:
emissions: 1.6951829788409294
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 45954114684
- CO2 Emissions (in grams): 1.6952
## Validation Metrics
- Loss: 0.156
- Accuracy: 0.953
- Precision: 0.951
- Recall: 0.957
- AUC: 0.989
- F1: 0.954
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/billster45/autotrain-imdb-sentiment-45954114684
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("billster45/autotrain-imdb-sentiment-45954114684", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("billster45/autotrain-imdb-sentiment-45954114684", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Daivakai/DialoGPT-small-saitama
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: news-summarization-argilla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news-summarization-argilla
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6405
- Rouge1: 0.2882
- Rouge2: 0.0847
- Rougel: 0.2411
- Rougelsum: 0.2412
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 4.0721 | 0.2503 | 0.0688 | 0.2149 | 0.2162 | 19.0 |
| No log | 2.0 | 24 | 3.8238 | 0.269 | 0.0756 | 0.2266 | 0.2281 | 19.0 |
| No log | 3.0 | 36 | 3.6874 | 0.283 | 0.0846 | 0.2387 | 0.2388 | 19.0 |
| No log | 4.0 | 48 | 3.6405 | 0.2882 | 0.0847 | 0.2411 | 0.2412 | 19.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Darkrider/covidbert_medmarco
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2010.05987",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35
| 2023-04-02T12:02:53Z
|
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: juro95/xlm-roberta-finetuned-ner-recleaned_cased_0.3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juro95/xlm-roberta-finetuned-ner-recleaned_cased_0.3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0256
- Validation Loss: 0.0420
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 73876, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
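The `PolynomialDecay` schedule in the optimizer config above (initial LR 2e-05, `power` 1.0, decaying to 0 over 73,876 steps) is a simple linear decay; a plain-Python sketch of what it computes:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=73876,
                     end_lr=0.0, power=1.0):
    """Decay from initial_lr to end_lr over decay_steps; power=1 is linear,
    matching the schedule config in this card."""
    step = min(step, decay_steps)
    remaining = 1 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr

print(polynomial_decay(0))       # starts at the initial learning rate
print(polynomial_decay(36938))   # halfway: half the initial rate
print(polynomial_decay(73876))   # fully decayed to end_lr
```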
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1646 | 0.0799 | 0 |
| 0.0646 | 0.0527 | 1 |
| 0.0389 | 0.0435 | 2 |
| 0.0256 | 0.0420 | 3 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.6.5
- Datasets 2.3.2
- Tokenizers 0.13.2
|
Declan/CNN_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Declan/ChicagoTribune_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.29 +/- 18.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Declan/FoxNews_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: prueba5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prueba5
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2442
- Precision: 0.5258
- Recall: 0.5574
- F1: 0.5411
- Accuracy: 0.9609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.75e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 57 | 0.2341 | 0.0 | 0.0 | 0.0 | 0.9488 |
| No log | 2.0 | 114 | 0.2411 | 0.0 | 0.0 | 0.0 | 0.9488 |
| No log | 3.0 | 171 | 0.2150 | 0.0385 | 0.0055 | 0.0096 | 0.9410 |
| No log | 4.0 | 228 | 0.1885 | 0.25 | 0.0929 | 0.1355 | 0.9500 |
| No log | 5.0 | 285 | 0.1730 | 0.3830 | 0.1967 | 0.2599 | 0.9524 |
| No log | 6.0 | 342 | 0.1591 | 0.5098 | 0.2842 | 0.3649 | 0.9581 |
| No log | 7.0 | 399 | 0.1665 | 0.5405 | 0.3279 | 0.4082 | 0.9609 |
| No log | 8.0 | 456 | 0.1856 | 0.5294 | 0.4918 | 0.5099 | 0.9604 |
| 0.1706 | 9.0 | 513 | 0.1727 | 0.5 | 0.5191 | 0.5094 | 0.9611 |
| 0.1706 | 10.0 | 570 | 0.1717 | 0.5669 | 0.4863 | 0.5235 | 0.9639 |
| 0.1706 | 11.0 | 627 | 0.1913 | 0.5024 | 0.5628 | 0.5309 | 0.9601 |
| 0.1706 | 12.0 | 684 | 0.1793 | 0.515 | 0.5628 | 0.5379 | 0.9619 |
| 0.1706 | 13.0 | 741 | 0.2009 | 0.5679 | 0.5027 | 0.5333 | 0.9618 |
| 0.1706 | 14.0 | 798 | 0.2043 | 0.5333 | 0.5683 | 0.5503 | 0.9604 |
| 0.1706 | 15.0 | 855 | 0.2052 | 0.5486 | 0.5246 | 0.5363 | 0.9629 |
| 0.1706 | 16.0 | 912 | 0.2234 | 0.5183 | 0.5410 | 0.5294 | 0.9581 |
| 0.1706 | 17.0 | 969 | 0.2157 | 0.5424 | 0.5246 | 0.5333 | 0.9616 |
| 0.0202 | 18.0 | 1026 | 0.2207 | 0.5025 | 0.5574 | 0.5285 | 0.9596 |
| 0.0202 | 19.0 | 1083 | 0.2297 | 0.5025 | 0.5410 | 0.5211 | 0.9573 |
| 0.0202 | 20.0 | 1140 | 0.2264 | 0.5131 | 0.5355 | 0.5241 | 0.9593 |
| 0.0202 | 21.0 | 1197 | 0.2300 | 0.5181 | 0.5464 | 0.5319 | 0.9593 |
| 0.0202 | 22.0 | 1254 | 0.2348 | 0.5241 | 0.5355 | 0.5297 | 0.9604 |
| 0.0202 | 23.0 | 1311 | 0.2372 | 0.5196 | 0.5792 | 0.5478 | 0.9588 |
| 0.0202 | 24.0 | 1368 | 0.2349 | 0.5319 | 0.5464 | 0.5391 | 0.9613 |
| 0.0202 | 25.0 | 1425 | 0.2353 | 0.5312 | 0.5574 | 0.544 | 0.9619 |
| 0.0202 | 26.0 | 1482 | 0.2388 | 0.5489 | 0.5519 | 0.5504 | 0.9614 |
| 0.0044 | 27.0 | 1539 | 0.2396 | 0.5243 | 0.5301 | 0.5272 | 0.9618 |
| 0.0044 | 28.0 | 1596 | 0.2442 | 0.5152 | 0.5574 | 0.5354 | 0.9603 |
| 0.0044 | 29.0 | 1653 | 0.2444 | 0.5178 | 0.5574 | 0.5368 | 0.9604 |
| 0.0044 | 30.0 | 1710 | 0.2442 | 0.5258 | 0.5574 | 0.5411 | 0.9609 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Declan/FoxNews_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: openrail
---
# Kaiyo Mixes
I'm new to using Hugging Face, so this will act as a repository for some of my merged models.
Attached is the Notion page where I document my recipes for each model and some example images.
https://kaiyo.notion.site/Personal-Models-f5c0aff01eab48869699b958a66e4501
Please note that these images should not be used for commercial purposes
and the models should not be redistributed and sold for monetary gain.
Thanks for showing an interest in these merges!
- Kaiyo
|
Declan/HuffPost_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkish-rte-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkish-rte-2
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7020
- Validation Loss: 0.6937
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7029 | 0.6953 | 0 |
| 0.7032 | 0.6998 | 1 |
| 0.7010 | 0.6923 | 2 |
| 0.6984 | 0.6917 | 3 |
| 0.7020 | 0.6937 | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Declan/NPR_model_v1
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Find your model_id: Shivraj8615/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Declan/NPR_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: unknown
---
Model generated by the Diffusers fine-tuning example at https://huggingface.co/docs/diffusers/training/text2image
|
Declan/NewYorkPost_model_v1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
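REINFORCE updates the policy using the discounted return computed backwards over each episode; a minimal standalone sketch of that step (illustrative only, not taken from this model's training script):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the return G_t for every step of one episode, iterating backwards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g          # G_t = r_t + gamma * G_{t+1}
        returns.insert(0, g)
    return returns

# Three steps of reward 1 with gamma = 0.5 give returns [1.75, 1.5, 1.0]
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))
```

Each return is then typically multiplied by the log-probability of the taken action to form the policy-gradient loss.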
|
Declan/Reuters_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-02T16:32:33Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Basic-with-indobert-base-uncased-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Basic-with-indobert-base-uncased-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6185
- Accuracy: 0.7629
- F1: 0.7622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
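The total train batch size above follows from gradient accumulation; a quick sanity check of the relationship, using the values from the list above:

```python
train_batch_size = 8
gradient_accumulation_steps = 16

# Gradients are accumulated over 16 micro-batches before each optimizer step,
# so the effective (total) train batch size is their product.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128
```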
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1098 | 0.5 | 40 | 1.0813 | 0.4110 | 0.4037 |
| 1.0991 | 0.99 | 80 | 0.9440 | 0.5653 | 0.5613 |
| 1.0022 | 1.49 | 120 | 0.8605 | 0.6249 | 0.6215 |
| 0.876 | 1.98 | 160 | 0.7910 | 0.6582 | 0.6563 |
| 0.7978 | 2.48 | 200 | 0.7613 | 0.6800 | 0.6777 |
| 0.7978 | 2.97 | 240 | 0.7216 | 0.7005 | 0.7020 |
| 0.7667 | 3.47 | 280 | 0.6940 | 0.7178 | 0.7179 |
| 0.7091 | 3.96 | 320 | 0.6762 | 0.7310 | 0.7309 |
| 0.6752 | 4.46 | 360 | 0.6569 | 0.7424 | 0.7413 |
| 0.6425 | 4.95 | 400 | 0.6440 | 0.7610 | 0.7618 |
| 0.6425 | 5.45 | 440 | 0.6302 | 0.7619 | 0.7618 |
| 0.6153 | 5.94 | 480 | 0.6266 | 0.7615 | 0.7613 |
| 0.5945 | 6.44 | 520 | 0.6291 | 0.7638 | 0.7634 |
| 0.5587 | 6.93 | 560 | 0.6222 | 0.7606 | 0.7593 |
| 0.5452 | 7.43 | 600 | 0.6212 | 0.7633 | 0.7631 |
| 0.5452 | 7.93 | 640 | 0.6185 | 0.7629 | 0.7622 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Declan/Reuters_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.76 +/- 0.83
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Declan/Reuters_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Translated-with-indobert-base-uncased-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Translated-with-indobert-base-uncased-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5551
- Accuracy: 0.8070
- F1: 0.8076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6592 | 0.5 | 1533 | 0.5988 | 0.7564 | 0.7573 |
| 0.5938 | 1.0 | 3066 | 0.5563 | 0.7806 | 0.7816 |
| 0.5258 | 1.5 | 4599 | 0.5301 | 0.7918 | 0.7919 |
| 0.5276 | 2.0 | 6132 | 0.5165 | 0.7959 | 0.7952 |
| 0.4947 | 2.5 | 7665 | 0.5346 | 0.7957 | 0.7967 |
| 0.4967 | 3.0 | 9198 | 0.5061 | 0.8066 | 0.8071 |
| 0.4311 | 3.5 | 10731 | 0.5171 | 0.8038 | 0.8039 |
| 0.4436 | 4.0 | 12264 | 0.5064 | 0.8078 | 0.8087 |
| 0.4174 | 4.5 | 13797 | 0.5220 | 0.8076 | 0.8080 |
| 0.414 | 5.0 | 15330 | 0.5166 | 0.8093 | 0.8094 |
| 0.3726 | 5.5 | 16863 | 0.5359 | 0.8083 | 0.8089 |
| 0.3974 | 6.0 | 18396 | 0.5292 | 0.8059 | 0.8063 |
| 0.3452 | 6.5 | 19929 | 0.5551 | 0.8070 | 0.8076 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Declan/test_model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
language:
- en
tags:
- NoSleep
- Reddit
- Story
- Horror
widget:
- text: "[WP] \""
example_title: "[WP] "
datasets:
- chloeliu/reddit_nosleep_posts
---
# "NoSleep" Writing Prompt Generator
A fine-tuned version of [GPT2](https://huggingface.co/gpt2) for generating writing prompts for the [GPT-NoSleep-355m model](https://huggingface.co/DarwinAnim8or/GPT-NoSleep-355m).
You can try this model in the linked Space, then feed its prompts to the NoSleep model to generate stories!
# Training Procedure
This was trained on the 'reddit_nosleep_posts' dataset, using the Happy Transformer library on Google Colab.
This model was trained for X epochs with learning rate 1e-2.
# Biases & Limitations
This model likely inherits the same biases and limitations as the original GPT2 it is based on, along with additional heavy biases from the dataset.
It will likely generate offensive output.
# Intended Use
This model is meant for fun, nothing else.
# Sample Use
```python
from happytransformer import HappyGeneration, GENSettings

# Load the generator (replace the second argument with this model's repo id)
happy_gen = HappyGeneration("GPT2", "path/to/this-model")

args_top_k = GENSettings(no_repeat_ngram_size=1, do_sample=True, top_k=80, temperature=0.4, max_length=25, early_stopping=True)
result = happy_gen.generate_text("[WP] \"", args=args_top_k)
print(result.text)
```
|
Declan/test_push
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
# Vocabulary Trimmed [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25): `vocabtrimmer/mbart-large-cc25-trimmed-ja`
This model is a trimmed version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/mbart-large-cc25 | vocabtrimmer/mbart-large-cc25-trimmed-ja |
|:---------------------------|:----------------------------|:-------------------------------------------|
| parameter_size_full | 610,851,840 | 434,447,360 |
| parameter_size_embedding | 512,055,296 | 159,246,336 |
| vocab_size | 250,027 | 77,757 |
| compression_rate_full | 100.0 | 71.12 |
| compression_rate_embedding | 100.0 | 31.1 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ja | vocabtrimmer/mc4_validation | text | ja | validation | | 2 |
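The compression rates in the summary table follow directly from the parameter counts; a minimal sketch of the arithmetic, using the values from the table above:

```python
# Recompute the compression rates reported in the summary table.
full_params_before = 610_851_840   # facebook/mbart-large-cc25
full_params_after = 434_447_360    # trimmed model
emb_params_before = 512_055_296   # embedding parameters before trimming
emb_params_after = 159_246_336    # embedding parameters after trimming

rate_full = round(full_params_after / full_params_before * 100, 2)
rate_emb = round(emb_params_after / emb_params_before * 100, 2)

print(rate_full, rate_emb)  # 71.12 31.1 — matches the table
```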
|
DeepBasak/Slack
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Translated-with-xlm-roberta-large-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Translated-with-xlm-roberta-large-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4945
- Accuracy: 0.8553
- F1: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4916 | 0.5 | 1533 | 0.4336 | 0.8335 | 0.8342 |
| 0.4465 | 1.0 | 3066 | 0.4120 | 0.8454 | 0.8463 |
| 0.3666 | 1.5 | 4599 | 0.4001 | 0.8537 | 0.8538 |
| 0.3876 | 2.0 | 6132 | 0.3928 | 0.8530 | 0.8528 |
| 0.3347 | 2.5 | 7665 | 0.4415 | 0.8502 | 0.8505 |
| 0.3372 | 3.0 | 9198 | 0.4174 | 0.8582 | 0.8583 |
| 0.2641 | 3.5 | 10731 | 0.4568 | 0.8532 | 0.8529 |
| 0.2747 | 4.0 | 12264 | 0.4262 | 0.8576 | 0.8577 |
| 0.231 | 4.5 | 13797 | 0.4945 | 0.8553 | 0.8555 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
DeepChem/ChemBERTa-10M-MLM
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 90
| null |
---
tags:
- spacy
- token-classification
language:
- de
model-index:
- name: de_STTS2_folk_normal_orth
results:
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9379513783
---
## de_STTS2_folk_normal_orth tagger
This is a spaCy language model trained to use the Stuttgart-Tübingen Tagset version 2.0, which was designed to tag transcripts of conversational speech in German.
The model may be useful for tagging ASR transcripts such as those collected in the [CoGS](https://cc.oulu.fi/~scoats/CoGS.html) corpus.
The model was trained using the tag annotations from the FOLK corpus at https://agd.ids-mannheim.de/folk-gold.shtml, employing an 80/20 training/test split. This version of the tagger was trained on data in standard German orthography with regard to upper- and lower-case characters.
Usage example:
```python
!pip install https://huggingface.co/stcoats/de_STTS2_folk_normal_orth/resolve/main/de_STTS2_folk_normal_orth-any-py3-none-any.whl
import spacy
import de_STTS2_folk_normal_orth
nlp = de_STTS2_folk_normal_orth.load()
doc = nlp("ach so meinst du wir sollen es jetzt tun")
for token in doc:
print(token.text, token.tag_)
```
### References
Coats, Steven. (In review).
Westpfahl, Swantje and Thomas Schmidt. (2016): [FOLK-Gold – A GOLD standard for Part-of-Speech-Tagging of Spoken German](https://aclanthology.org/L16-1237). In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), Portorož, Slovenia. Paris: European Language Resources Association (ELRA), pp. 1493-1499.
---
| Feature | Description |
| --- | --- |
| **Name** | `de_STTS2_folk_normal_orth` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger` |
| **Components** | `tok2vec`, `tagger` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (62 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$.`, `AB`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPRART`, `APZR`, `ART`, `CARD`, `FM`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NGAKW`, `NGHES`, `NGIRR`, `NGONO`, `NN`, `ORD`, `PDAT`, `PDS`, `PIAT`, `PIDAT`, `PIDS`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PTKA`, `PTKIFG`, `PTKMA`, `PTKMWL`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `SEDM`, `SEQU`, `SPELL`, `TRUNC`, `UI`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 93.80 |
| `TOK2VEC_LOSS` | 204127.79 |
| `TAGGER_LOSS` | 119369.65 |
|
DeepChem/ChemBERTa-10M-MTR
|
[
"pytorch",
"roberta",
"arxiv:1910.09700",
"transformers"
] | null |
{
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 708
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-IndoNLI-Augmented-with-xlm-roberta-large-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Augmented-with-xlm-roberta-large-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4709
- Accuracy: 0.8563
- F1: 0.8567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4755 | 0.5 | 1574 | 0.4331 | 0.8360 | 0.8358 |
| 0.4397 | 1.0 | 3148 | 0.3990 | 0.8489 | 0.8492 |
| 0.3992 | 1.5 | 4722 | 0.4178 | 0.8469 | 0.8478 |
| 0.3825 | 2.0 | 6296 | 0.3918 | 0.8552 | 0.8552 |
| 0.334 | 2.5 | 7870 | 0.4159 | 0.8535 | 0.8537 |
| 0.3159 | 3.0 | 9444 | 0.4048 | 0.8613 | 0.8611 |
| 0.2738 | 3.5 | 11018 | 0.4437 | 0.8552 | 0.8555 |
| 0.2758 | 4.0 | 12592 | 0.4381 | 0.8538 | 0.8542 |
| 0.2311 | 4.5 | 14166 | 0.4709 | 0.8563 | 0.8567 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
DeepChem/ChemBERTa-77M-MTR
|
[
"pytorch",
"roberta",
"transformers"
] | null |
{
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7,169
| null |
# Vocabulary Trimmed [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25): `vocabtrimmer/mbart-large-cc25-trimmed-ko`
This model is a trimmed version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/mbart-large-cc25 | vocabtrimmer/mbart-large-cc25-trimmed-ko |
|:---------------------------|:----------------------------|:-------------------------------------------|
| parameter_size_full | 610,851,840 | 402,585,600 |
| parameter_size_embedding | 512,055,296 | 95,522,816 |
| vocab_size | 250,027 | 46,642 |
| compression_rate_full | 100.0 | 65.91 |
| compression_rate_embedding | 100.0 | 18.65 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | | 2 |
|
DeepChem/SmilesTokenizer_PubChem_1M
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 227
| 2023-04-05T14:45:21Z
|
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
---
# 10 Plus Beautiful Women
danbooru.donmai.us/posts?tags=10_plus
v1 - 20 Images / 2000 Steps
- Basic Filewords
- 40% CamelliaMix NSFW v1.1
- 30% 3moon Anime Line
- 30% NAI (animefull-final)
v2 - 21 Images / 2100 Steps
- Basic Filewords
- 45.5% 3moon Anime Line
- 24.5% NabiMix
- 19.5% ntc's Simple
- 10.5% CamelliaMix NSFW v1.1
triggers "beautiful woman", "tp"
---
# Anabone Beautiful Women
danbooru.donmai.us/posts?tags=anabone
80 Images / 8000 Steps
- Basic Filewords
- 60% CamelliaMix NSFW v1.1
- 32% Pastelmarker
- 8% ntc's Simple
triggers "beautiful woman", "ab"
---
# Piyodesu Beautiful Women
danbooru.donmai.us/posts?tags=piyodesu
v1.0 - 27 Images / 2700 Steps
- Basic Filewords
- NAI (animefull-final)
v2.0 - 25 Images / 2500 Steps
- Basic Filewords
- CamelliaMix NSFW v1.1
v3.0 - 58 Images / 5800 Steps
- Basic Filewords
- 60% CamelliaMix NSFW v1.1
- 40% NAI (animefull-final)
v3.1 - 57 Images / 5700 Steps
- Basic Filewords
- 60% CamelliaMix NSFW v1.1
- 40% NAI (animefull-final)
v3.5 - 57 Images / 5700 Steps
- Basic Filewords
- 70% Piyodesu v3.1
- 30% Anything v5
DBv - 92 Images / 9200 Steps
- Deepbooru Tags
- 40% CamelliaMix NSFW v1.1
- 30% NabiMix
- 30% Anything v5
triggers v1.0-3.0 "beautiful woman", "pd", "upskirt", "from behind", "vaginal beauty"
triggers v3.1-3.5 "beautiful woman", "pd", "upskirt", "from behind", "nude", "umbrella"
triggers DBv "1girl" "pd"
---
# Piyodesu Aderet Beautiful Women
Piyodesu merged with Aderet trained by nProtec
civitai.com/models/28165/aderet-from-saving-80000-gold-coins
v1.0 - 70% Piyodesu v3.0, 30% Aderet
v1.1 - 70% Piyodesu v3.1, 30% Aderet
v1.5 - 70% Piyodesu v3.5, 30% Aderet
DBv - 70% Piyodesu DBv, 30% Aderet
triggers same as piyodesu + "blue eyes", "white hair"
*nProtec_Merge = reverse merge (70-80% Aderet)
---
# Punkodesu
Piyodesu merged with Punk Women
0.7 Piyodesu v3.5 + 0.5 Punk Woman v2
---
# Tomato Rice Beautiful Women
danbooru.donmai.us/posts?tags=tomato_rice
v1 - 65 Images / 6500 Steps
- Basic Filewords
- CamelliaMix NSFW v1.1
v2 - 65 Images / 6500 Steps
- Basic Filewords
- 70% Anything v5
- 15% CamelliaMix NSFW v1.1
- 15% NAI (animefull-final)
DBv - 70 Images / 7000 Steps
- Deepbooru Tags
- 40% CamelliaMix NSFW v1.1
- 30% NabiMix
- 30% Anything v5
triggers v1-2 "beautiful woman", "tr", "with horns", "topless", "tit wank"
triggers DBv "1girl" "tr"
---
# WDS
44 Images / 4400 Steps
- Basic Filewords
- NAI (animefull-final)
trigger "woman dog sex wds"
---
# Be Careful!
These models are not intended for commercial use.
Using them commercially may infringe copyrights and break the law.
Please use them responsibly.
---
civitai.com/user/Powidl43
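The percentage recipes above are weighted checkpoint merges. A minimal sketch of the idea in plain Python — real merges operate on the tensors of a Stable Diffusion state dict, so the tiny two-value "checkpoints" here are purely illustrative:

```python
def merge_checkpoints(models, weights):
    """Weighted average of same-shaped checkpoints.

    models  -- list of dicts mapping parameter name -> list of floats
    weights -- blend weights (e.g. [0.75, 0.25] for a 75%/25% merge)
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "blend weights should sum to 1"
    merged = {}
    for key in models[0]:
        n = len(models[0][key])
        merged[key] = [
            sum(w * m[key][i] for w, m in zip(weights, models))
            for i in range(n)
        ]
    return merged

# Illustrative 75%/25% merge of two toy "checkpoints".
a = {"layer.weight": [1.0, 2.0]}
b = {"layer.weight": [3.0, 6.0]}
print(merge_checkpoints([a, b], [0.75, 0.25]))  # {'layer.weight': [1.5, 3.0]}
```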
|
DeepESP/gpt2-spanish-medium
|
[
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 340
| null |
Access to model lvelho/sd-lil-model-lora is restricted and you are not in the authorized list. Visit https://huggingface.co/lvelho/sd-lil-model-lora to ask for access.
|
DeepESP/gpt2-spanish
|
[
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,463
| null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.65 +/- 0.79
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Example (the repo id and filename below are placeholders — substitute this model's actual values):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; repo_id/filename are placeholders.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
|
DeepPavlov/bert-base-cased-conversational
|
[
"pytorch",
"jax",
"bert",
"feature-extraction",
"en",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3,009
| null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 54.26 +/- 75.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'nikgeo/LunarLanderPPO',
 'batch_size': 512,
 'minibatch_size': 128}
```
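Note that `batch_size` and `minibatch_size` are derived from the other settings rather than chosen independently. A quick check of the arithmetic, using the values from the hyperparameter dict:

```python
# Values from the hyperparameter dict above.
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps            # one rollout across all envs
minibatch_size = batch_size // num_minibatches

print(batch_size)      # 512, matching 'batch_size'
print(minibatch_size)  # 128, matching 'minibatch_size'
```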
|
DeepPavlov/rubert-base-cased-conversational
|
[
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"transformers",
"has_space"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17,362
| null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/27466/kanzaki-kaori-toaru-majutsu-no-index
|
DeltaHub/adapter_t5-3b_mrpc
|
[
"pytorch",
"transformers"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_wl_custom_architecture_150_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_wl_custom_architecture_150_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
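With a linear scheduler and no warmup, the learning rate decays from its initial value to zero over training. A minimal sketch of that schedule — the total step count below is an assumption for illustration, since it is not stated in this card:

```python
def linear_lr(step, total_steps, base_lr=0.0005):
    """Linearly decay base_lr to 0 over total_steps (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

TOTAL = 100_000  # hypothetical total number of optimizer steps

print(linear_lr(0, TOTAL))           # 0.0005 at the start
print(linear_lr(TOTAL // 2, TOTAL))  # halfway through: 0.00025
print(linear_lr(TOTAL, TOTAL))       # 0.0 at the end
```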
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 24.8046 | 0.19 | 500 | 12.7123 |
| 11.0129 | 0.39 | 1000 | 9.6736 |
| 8.9635 | 0.58 | 1500 | 8.4964 |
| 8.0902 | 0.77 | 2000 | 7.8951 |
| 7.9548 | 0.97 | 2500 | 7.8226 |
| 7.6509 | 1.16 | 3000 | 7.7462 |
| 7.6147 | 1.36 | 3500 | 7.6485 |
| 7.5591 | 1.55 | 4000 | 7.6231 |
| 7.5172 | 1.74 | 4500 | 7.6365 |
| 7.4983 | 1.94 | 5000 | 7.4450 |
| 7.4245 | 2.13 | 5500 | 7.4577 |
| 7.2719 | 2.32 | 6000 | 7.3436 |
| 7.3124 | 2.52 | 6500 | 7.3705 |
| 7.2521 | 2.71 | 7000 | 7.3035 |
| 7.2334 | 2.9 | 7500 | 7.3254 |
| 7.2194 | 3.1 | 8000 | 7.2225 |
| 7.1485 | 3.29 | 8500 | 7.1902 |
| 7.1457 | 3.49 | 9000 | 7.2074 |
| 7.0741 | 3.68 | 9500 | 7.1499 |
| 7.0648 | 3.87 | 10000 | 7.1375 |
| 7.1039 | 4.07 | 10500 | 7.0124 |
| 7.063 | 4.26 | 11000 | 7.0609 |
| 7.0149 | 4.45 | 11500 | 7.0481 |
| 6.9925 | 4.65 | 12000 | 6.9921 |
| 7.007 | 4.84 | 12500 | 6.9332 |
| 6.9724 | 5.03 | 13000 | 6.9564 |
| 6.9151 | 5.23 | 13500 | 6.9191 |
| 6.9024 | 5.42 | 14000 | 6.9580 |
| 6.9217 | 5.62 | 14500 | 6.9994 |
| 6.8691 | 5.81 | 15000 | 6.8627 |
| 6.9037 | 6.0 | 15500 | 6.9464 |
| 6.9068 | 6.2 | 16000 | 6.8337 |
| 6.8132 | 6.39 | 16500 | 6.9507 |
| 6.879 | 6.58 | 17000 | 6.8269 |
| 6.8611 | 6.78 | 17500 | 6.8231 |
| 6.832 | 6.97 | 18000 | 6.8648 |
| 6.888 | 7.16 | 18500 | 6.9218 |
| 6.846 | 7.36 | 19000 | 6.8436 |
| 6.8934 | 7.55 | 19500 | 6.8003 |
| 6.8736 | 7.75 | 20000 | 6.7671 |
| 6.8185 | 7.94 | 20500 | 6.7706 |
| 6.8035 | 8.13 | 21000 | 6.7937 |
| 6.8225 | 8.33 | 21500 | 6.7516 |
| 6.7246 | 8.52 | 22000 | 6.7865 |
| 6.8394 | 8.71 | 22500 | 6.7451 |
| 6.8449 | 8.91 | 23000 | 6.7132 |
| 6.8184 | 9.1 | 23500 | 6.7226 |
| 6.7183 | 9.3 | 24000 | 6.7481 |
| 6.7688 | 9.49 | 24500 | 6.8439 |
| 6.8213 | 9.68 | 25000 | 6.7382 |
| 6.8382 | 9.88 | 25500 | 6.7100 |
| 6.8008 | 10.07 | 26000 | 6.7362 |
| 6.7856 | 10.26 | 26500 | 6.7150 |
| 6.7678 | 10.46 | 27000 | 6.6879 |
| 6.7181 | 10.65 | 27500 | 6.6985 |
| 6.7794 | 10.84 | 28000 | 6.7540 |
| 6.793 | 11.04 | 28500 | 6.6759 |
| 6.758 | 11.23 | 29000 | 6.8282 |
| 6.7859 | 11.43 | 29500 | 6.7199 |
| 6.7246 | 11.62 | 30000 | 6.7159 |
| 6.7074 | 11.81 | 30500 | 6.6741 |
| 6.7431 | 12.01 | 31000 | 6.5994 |
| 6.7848 | 12.2 | 31500 | 6.7413 |
| 6.6443 | 12.39 | 32000 | 6.7307 |
| 6.713 | 12.59 | 32500 | 6.6367 |
| 6.7182 | 12.78 | 33000 | 6.7215 |
| 6.6531 | 12.97 | 33500 | 6.6576 |
| 6.6817 | 13.17 | 34000 | 6.6298 |
| 6.658 | 13.36 | 34500 | 6.6509 |
| 6.6476 | 13.56 | 35000 | 6.6960 |
| 6.7139 | 13.75 | 35500 | 6.6714 |
| 6.7637 | 13.94 | 36000 | 6.6451 |
| 6.6502 | 14.14 | 36500 | 6.6299 |
| 6.6488 | 14.33 | 37000 | 6.5919 |
| 6.6018 | 14.52 | 37500 | 6.6460 |
| 6.6399 | 14.72 | 38000 | 6.5534 |
| 6.6708 | 14.91 | 38500 | 6.5580 |
| 6.618 | 15.1 | 39000 | 6.6200 |
| 6.6335 | 15.3 | 39500 | 6.6398 |
| 6.6793 | 15.49 | 40000 | 6.6470 |
| 6.6304 | 15.69 | 40500 | 6.5910 |
| 6.572 | 15.88 | 41000 | 6.6311 |
| 6.5509 | 16.07 | 41500 | 6.5615 |
| 6.5801 | 16.27 | 42000 | 6.6375 |
| 6.5925 | 16.46 | 42500 | 6.5788 |
| 6.6053 | 16.65 | 43000 | 6.5777 |
| 6.574 | 16.85 | 43500 | 6.5225 |
| 6.6412 | 17.04 | 44000 | 6.6745 |
| 6.6383 | 17.23 | 44500 | 6.6072 |
| 6.5596 | 17.43 | 45000 | 6.6791 |
| 6.5853 | 17.62 | 45500 | 6.5915 |
| 6.5862 | 17.82 | 46000 | 6.5101 |
| 6.5739 | 18.01 | 46500 | 6.5603 |
| 6.5988 | 18.2 | 47000 | 6.6307 |
| 6.5824 | 18.4 | 47500 | 6.5721 |
| 6.6016 | 18.59 | 48000 | 6.6443 |
| 6.5254 | 18.78 | 48500 | 6.6235 |
| 6.6509 | 18.98 | 49000 | 6.5812 |
| 6.5534 | 19.17 | 49500 | 6.6311 |
| 6.5439 | 19.36 | 50000 | 6.5190 |
| 6.4958 | 19.56 | 50500 | 6.6022 |
| 6.5812 | 19.75 | 51000 | 6.6042 |
| 6.5624 | 19.95 | 51500 | 6.5598 |
| 6.4915 | 20.14 | 52000 | 6.5793 |
| 6.495 | 20.33 | 52500 | 6.4773 |
| 6.579 | 20.53 | 53000 | 6.4957 |
| 6.6115 | 20.72 | 53500 | 6.5144 |
| 6.5592 | 20.91 | 54000 | 6.5099 |
| 6.5474 | 21.11 | 54500 | 6.4373 |
| 6.5568 | 21.3 | 55000 | 6.4581 |
| 6.4647 | 21.49 | 55500 | 6.4053 |
| 6.5423 | 21.69 | 56000 | 6.4721 |
| 6.4784 | 21.88 | 56500 | 6.6378 |
| 6.4668 | 22.08 | 57000 | 6.4698 |
| 6.53 | 22.27 | 57500 | 6.3605 |
| 6.5545 | 22.46 | 58000 | 6.4776 |
| 6.5224 | 22.66 | 58500 | 6.5105 |
| 6.5243 | 22.85 | 59000 | 6.4652 |
| 6.5483 | 23.04 | 59500 | 6.5137 |
| 6.4688 | 23.24 | 60000 | 6.3626 |
| 6.4506 | 23.43 | 60500 | 6.5526 |
| 6.4591 | 23.63 | 61000 | 6.5378 |
| 6.5187 | 23.82 | 61500 | 6.4938 |
| 6.5293 | 24.01 | 62000 | 6.5242 |
| 6.4809 | 24.21 | 62500 | 6.3641 |
| 6.4143 | 24.4 | 63000 | 6.5578 |
| 6.4946 | 24.59 | 63500 | 6.5679 |
| 6.4409 | 24.79 | 64000 | 6.5742 |
| 6.5167 | 24.98 | 64500 | 6.4332 |
| 6.4738 | 25.17 | 65000 | 6.4865 |
| 6.479 | 25.37 | 65500 | 6.4287 |
| 6.5774 | 25.56 | 66000 | 6.4854 |
| 6.5448 | 25.76 | 66500 | 6.5641 |
| 6.4514 | 25.95 | 67000 | 6.5859 |
| 6.446 | 26.14 | 67500 | 6.5205 |
| 6.5242 | 26.34 | 68000 | 6.3855 |
| 6.3822 | 26.53 | 68500 | 6.5322 |
| 6.4347 | 26.72 | 69000 | 6.4999 |
| 6.4718 | 26.92 | 69500 | 6.5620 |
| 6.4764 | 27.11 | 70000 | 6.4305 |
| 6.4518 | 27.3 | 70500 | 6.5363 |
| 6.4408 | 27.5 | 71000 | 6.5173 |
| 6.5088 | 27.69 | 71500 | 6.5415 |
| 6.4482 | 27.89 | 72000 | 6.4463 |
| 6.4399 | 28.08 | 72500 | 6.6054 |
| 6.4729 | 28.27 | 73000 | 6.3815 |
| 6.4443 | 28.47 | 73500 | 6.4110 |
| 6.3291 | 28.66 | 74000 | 6.5276 |
| 6.5036 | 28.85 | 74500 | 6.4105 |
| 6.3918 | 29.05 | 75000 | 6.3938 |
| 6.4873 | 29.24 | 75500 | 6.5735 |
| 6.4014 | 29.43 | 76000 | 6.5164 |
| 6.432 | 29.63 | 76500 | 6.4788 |
| 6.4125 | 29.82 | 77000 | 6.5010 |
| 6.4635 | 30.02 | 77500 | 6.5212 |
| 6.4787 | 30.21 | 78000 | 6.4719 |
| 6.3789 | 30.4 | 78500 | 6.4668 |
| 6.4376 | 30.6 | 79000 | 6.4990 |
| 6.4255 | 30.79 | 79500 | 6.5125 |
| 6.4482 | 30.98 | 80000 | 6.5029 |
| 6.4854 | 31.18 | 80500 | 6.4148 |
| 6.3694 | 31.37 | 81000 | 6.3913 |
| 6.4794 | 31.56 | 81500 | 6.4093 |
| 6.5298 | 31.76 | 82000 | 6.4897 |
| 6.4557 | 31.95 | 82500 | 6.5037 |
| 6.4667 | 32.15 | 83000 | 6.5143 |
| 6.4302 | 32.34 | 83500 | 6.3899 |
| 6.3902 | 32.53 | 84000 | 6.3984 |
| 6.4345 | 32.73 | 84500 | 6.5251 |
| 6.4463 | 32.92 | 85000 | 6.3555 |
| 6.4069 | 33.11 | 85500 | 6.5103 |
| 6.3956 | 33.31 | 86000 | 6.4315 |
| 6.3726 | 33.5 | 86500 | 6.4607 |
| 6.4322 | 33.69 | 87000 | 6.4607 |
| 6.4396 | 33.89 | 87500 | 6.5517 |
| 6.3791 | 34.08 | 88000 | 6.3945 |
| 6.4187 | 34.28 | 88500 | 6.4253 |
| 6.3014 | 34.47 | 89000 | 6.4347 |
| 6.407 | 34.66 | 89500 | 6.3437 |
| 6.3979 | 34.86 | 90000 | 6.4753 |
| 6.3886 | 35.05 | 90500 | 6.4109 |
| 6.4035 | 35.24 | 91000 | 6.4625 |
| 6.3834 | 35.44 | 91500 | 6.2892 |
| 6.483 | 35.63 | 92000 | 6.3512 |
| 6.4095 | 35.82 | 92500 | 6.4270 |
| 6.3819 | 36.02 | 93000 | 6.5161 |
| 6.3699 | 36.21 | 93500 | 6.3608 |
| 6.4221 | 36.41 | 94000 | 6.4575 |
| 6.3719 | 36.6 | 94500 | 6.3277 |
| 6.3649 | 36.79 | 95000 | 6.4155 |
| 6.3472 | 36.99 | 95500 | 6.3126 |
| 6.4035 | 37.18 | 96000 | 6.3849 |
| 6.4118 | 37.37 | 96500 | 6.3637 |
| 6.4002 | 37.57 | 97000 | 6.5024 |
| 6.3689 | 37.76 | 97500 | 6.4493 |
| 6.4304 | 37.96 | 98000 | 6.3921 |
| 6.3789 | 38.15 | 98500 | 6.5012 |
| 6.3972 | 38.34 | 99000 | 6.4389 |
| 6.3478 | 38.54 | 99500 | 6.3466 |
| 6.3232 | 38.73 | 100000 | 6.3382 |
| 6.3631 | 38.92 | 100500 | 6.3558 |
| 6.3657 | 39.12 | 101000 | 6.3970 |
| 6.2932 | 39.31 | 101500 | 6.4777 |
| 6.3664 | 39.5 | 102000 | 6.2743 |
| 6.3362 | 39.7 | 102500 | 6.3683 |
| 6.2768 | 39.89 | 103000 | 6.3196 |
| 6.334 | 40.09 | 103500 | 6.3221 |
| 6.3592 | 40.28 | 104000 | 6.4366 |
| 6.3813 | 40.47 | 104500 | 6.3348 |
| 6.3267 | 40.67 | 105000 | 6.4029 |
| 6.3469 | 40.86 | 105500 | 6.4179 |
| 6.3868 | 41.05 | 106000 | 6.4578 |
| 6.3341 | 41.25 | 106500 | 6.2580 |
| 6.2609 | 41.44 | 107000 | 6.3612 |
| 6.389 | 41.63 | 107500 | 6.3980 |
| 6.3666 | 41.83 | 108000 | 6.3497 |
| 6.4192 | 42.02 | 108500 | 6.2666 |
| 6.3131 | 42.22 | 109000 | 6.5009 |
| 6.3601 | 42.41 | 109500 | 6.3073 |
| 6.3056 | 42.6 | 110000 | 6.4017 |
| 6.2856 | 42.8 | 110500 | 6.4237 |
| 6.3414 | 42.99 | 111000 | 6.3046 |
| 6.2585 | 43.18 | 111500 | 6.4079 |
| 6.3364 | 43.38 | 112000 | 6.3337 |
| 6.3018 | 43.57 | 112500 | 6.3583 |
| 6.2755 | 43.76 | 113000 | 6.2363 |
| 6.3035 | 43.96 | 113500 | 6.4418 |
| 6.329 | 44.15 | 114000 | 6.3339 |
| 6.3575 | 44.35 | 114500 | 6.2747 |
| 6.2961 | 44.54 | 115000 | 6.3100 |
| 6.3076 | 44.73 | 115500 | 6.2249 |
| 6.2606 | 44.93 | 116000 | 6.4091 |
| 6.3815 | 45.12 | 116500 | 6.3758 |
| 6.2911 | 45.31 | 117000 | 6.4308 |
| 6.3574 | 45.51 | 117500 | 6.3929 |
| 6.3193 | 45.7 | 118000 | 6.3429 |
| 6.2575 | 45.89 | 118500 | 6.4090 |
| 6.3526 | 46.09 | 119000 | 6.3755 |
| 6.3276 | 46.28 | 119500 | 6.2963 |
| 6.312 | 46.48 | 120000 | 6.3950 |
| 6.3039 | 46.67 | 120500 | 6.3574 |
| 6.3238 | 46.86 | 121000 | 6.4058 |
| 6.3289 | 47.06 | 121500 | 6.3378 |
| 6.2875 | 47.25 | 122000 | 6.3826 |
| 6.2757 | 47.44 | 122500 | 6.3762 |
| 6.3295 | 47.64 | 123000 | 6.3390 |
| 6.3808 | 47.83 | 123500 | 6.4283 |
| 6.2946 | 48.02 | 124000 | 6.4961 |
| 6.2336 | 48.22 | 124500 | 6.4333 |
| 6.2962 | 48.41 | 125000 | 6.3670 |
| 6.2817 | 48.61 | 125500 | 6.4529 |
| 6.3436 | 48.8 | 126000 | 6.4104 |
| 6.3781 | 48.99 | 126500 | 6.4424 |
| 6.2011 | 49.19 | 127000 | 6.3477 |
| 6.2685 | 49.38 | 127500 | 6.5722 |
| 6.3064 | 49.57 | 128000 | 6.2416 |
| 6.281 | 49.77 | 128500 | 6.2986 |
| 6.2667 | 49.96 | 129000 | 6.5320 |
| 6.2257 | 50.15 | 129500 | 6.3083 |
| 6.3593 | 50.35 | 130000 | 6.2661 |
| 6.2716 | 50.54 | 130500 | 6.4043 |
| 6.3103 | 50.74 | 131000 | 6.2645 |
| 6.3174 | 50.93 | 131500 | 6.3595 |
| 6.2355 | 51.12 | 132000 | 6.5065 |
| 6.2585 | 51.32 | 132500 | 6.3787 |
| 6.2728 | 51.51 | 133000 | 6.4104 |
| 6.2537 | 51.7 | 133500 | 6.3260 |
| 6.2933 | 51.9 | 134000 | 6.3715 |
| 6.1818 | 52.09 | 134500 | 6.2909 |
| 6.2838 | 52.29 | 135000 | 6.3538 |
| 6.233 | 52.48 | 135500 | 6.3544 |
| 6.2805 | 52.67 | 136000 | 6.3863 |
| 6.2157 | 52.87 | 136500 | 6.3701 |
| 6.2898 | 53.06 | 137000 | 6.3410 |
| 6.345 | 53.25 | 137500 | 6.3239 |
| 6.2705 | 53.45 | 138000 | 6.4318 |
| 6.2903 | 53.64 | 138500 | 6.2804 |
| 6.263 | 53.83 | 139000 | 6.3537 |
| 6.2182 | 54.03 | 139500 | 6.3480 |
| 6.2744 | 54.22 | 140000 | 6.3195 |
| 6.3152 | 54.42 | 140500 | 6.3934 |
| 6.2659 | 54.61 | 141000 | 6.3332 |
| 6.2617 | 54.8 | 141500 | 6.2579 |
| 6.3094 | 55.0 | 142000 | 6.2328 |
| 6.3308 | 55.19 | 142500 | 6.4148 |
| 6.2936 | 55.38 | 143000 | 6.2176 |
| 6.2945 | 55.58 | 143500 | 6.4020 |
| 6.1785 | 55.77 | 144000 | 6.1351 |
| 6.2737 | 55.96 | 144500 | 6.2304 |
| 6.2682 | 56.16 | 145000 | 6.2812 |
| 6.2155 | 56.35 | 145500 | 6.2700 |
| 6.226 | 56.55 | 146000 | 6.2475 |
| 6.2009 | 56.74 | 146500 | 6.3340 |
| 6.2521 | 56.93 | 147000 | 6.3261 |
| 6.1959 | 57.13 | 147500 | 6.3872 |
| 6.2285 | 57.32 | 148000 | 6.3304 |
| 6.2091 | 57.51 | 148500 | 6.3322 |
| 6.239 | 57.71 | 149000 | 6.2846 |
| 6.1941 | 57.9 | 149500 | 6.4017 |
| 6.2541 | 58.09 | 150000 | 6.2042 |
| 6.226 | 58.29 | 150500 | 6.3695 |
| 6.2403 | 58.48 | 151000 | 6.3264 |
| 6.2554 | 58.68 | 151500 | 6.2559 |
| 6.3007 | 58.87 | 152000 | 6.3502 |
| 6.2424 | 59.06 | 152500 | 6.3547 |
| 6.2272 | 59.26 | 153000 | 6.4295 |
| 6.1892 | 59.45 | 153500 | 6.6607 |
| 6.2815 | 59.64 | 154000 | 6.3525 |
| 6.2244 | 59.84 | 154500 | 6.3523 |
| 6.2797 | 60.03 | 155000 | 6.3626 |
| 6.2187 | 60.22 | 155500 | 6.4222 |
| 6.2169 | 60.42 | 156000 | 6.3485 |
| 6.2496 | 60.61 | 156500 | 6.3356 |
| 6.1102 | 60.81 | 157000 | 6.3071 |
| 6.3578 | 61.0 | 157500 | 6.3002 |
| 6.2318 | 61.19 | 158000 | 6.4061 |
| 6.2639 | 61.39 | 158500 | 6.3478 |
| 6.2794 | 61.58 | 159000 | 6.2974 |
| 6.2083 | 61.77 | 159500 | 6.3217 |
| 6.2093 | 61.97 | 160000 | 6.3045 |
| 6.1462 | 62.16 | 160500 | 6.1949 |
| 6.3406 | 62.35 | 161000 | 6.4346 |
| 6.2244 | 62.55 | 161500 | 6.3671 |
| 6.1255 | 62.74 | 162000 | 6.2972 |
| 6.1893 | 62.94 | 162500 | 6.4379 |
| 6.3224 | 63.13 | 163000 | 6.3682 |
| 6.1818 | 63.32 | 163500 | 6.4431 |
| 6.2361 | 63.52 | 164000 | 6.3767 |
| 6.244 | 63.71 | 164500 | 6.2516 |
| 6.187 | 63.9 | 165000 | 6.3070 |
| 6.1588 | 64.1 | 165500 | 6.4251 |
| 6.1975 | 64.29 | 166000 | 6.2673 |
| 6.2274 | 64.48 | 166500 | 6.3508 |
| 6.2535 | 64.68 | 167000 | 6.4831 |
| 6.2225 | 64.87 | 167500 | 6.3635 |
| 6.2468 | 65.07 | 168000 | 6.2326 |
| 6.2217 | 65.26 | 168500 | 6.4788 |
| 6.2087 | 65.45 | 169000 | 6.3234 |
| 6.2096 | 65.65 | 169500 | 6.2796 |
| 6.2535 | 65.84 | 170000 | 6.4544 |
| 6.2393 | 66.03 | 170500 | 6.4444 |
| 6.1029 | 66.23 | 171000 | 6.3661 |
| 6.2625 | 66.42 | 171500 | 6.3198 |
| 6.2007 | 66.62 | 172000 | 6.2895 |
| 6.2242 | 66.81 | 172500 | 6.3142 |
| 6.1879 | 67.0 | 173000 | 6.2988 |
| 6.2059 | 67.2 | 173500 | 6.3206 |
| 6.1516 | 67.39 | 174000 | 6.3751 |
| 6.1668 | 67.58 | 174500 | 6.4656 |
| 6.2432 | 67.78 | 175000 | 6.3792 |
| 6.2393 | 67.97 | 175500 | 6.2346 |
| 6.1305 | 68.16 | 176000 | 6.3603 |
| 6.178 | 68.36 | 176500 | 6.2234 |
| 6.212 | 68.55 | 177000 | 6.4403 |
| 6.2127 | 68.75 | 177500 | 6.5191 |
| 6.2136 | 68.94 | 178000 | 6.2183 |
| 6.2512 | 69.13 | 178500 | 6.3650 |
| 6.1163 | 69.33 | 179000 | 6.5378 |
| 6.1848 | 69.52 | 179500 | 6.4186 |
| 6.1964 | 69.71 | 180000 | 6.2395 |
| 6.1588 | 69.91 | 180500 | 6.5267 |
| 6.1854 | 70.1 | 181000 | 6.3233 |
| 6.1393 | 70.29 | 181500 | 6.3408 |
| 6.2122 | 70.49 | 182000 | 6.3399 |
| 6.222 | 70.68 | 182500 | 6.4418 |
| 6.1902 | 70.88 | 183000 | 6.4005 |
| 6.2175 | 71.07 | 183500 | 6.2667 |
| 6.2296 | 71.26 | 184000 | 6.3934 |
| 6.1185 | 71.46 | 184500 | 6.3090 |
| 6.1187 | 71.65 | 185000 | 6.3091 |
| 6.2343 | 71.84 | 185500 | 6.3387 |
| 6.2313 | 72.04 | 186000 | 6.4123 |
| 6.1379 | 72.23 | 186500 | 6.4942 |
| 6.238 | 72.42 | 187000 | 6.3057 |
| 6.1262 | 72.62 | 187500 | 6.4627 |
| 6.1365 | 72.81 | 188000 | 6.2741 |
| 6.1417 | 73.01 | 188500 | 6.3133 |
| 6.149 | 73.2 | 189000 | 6.3316 |
| 6.204 | 73.39 | 189500 | 6.3873 |
| 6.2358 | 73.59 | 190000 | 6.2632 |
| 6.16 | 73.78 | 190500 | 6.3650 |
| 6.2077 | 73.97 | 191000 | 6.4518 |
| 6.1722 | 74.17 | 191500 | 6.2005 |
| 6.0955 | 74.36 | 192000 | 6.2851 |
| 6.1319 | 74.55 | 192500 | 6.2528 |
| 6.1369 | 74.75 | 193000 | 6.5142 |
| 6.2238 | 74.94 | 193500 | 6.3739 |
| 6.1216 | 75.14 | 194000 | 6.2585 |
| 6.1693 | 75.33 | 194500 | 6.3033 |
| 6.12 | 75.52 | 195000 | 6.3827 |
| 6.2106 | 75.72 | 195500 | 6.2327 |
| 6.2167 | 75.91 | 196000 | 6.2846 |
| 6.1482 | 76.1 | 196500 | 6.4921 |
| 6.1469 | 76.3 | 197000 | 6.3111 |
| 6.1408 | 76.49 | 197500 | 6.3837 |
| 6.1839 | 76.68 | 198000 | 6.2321 |
| 6.2089 | 76.88 | 198500 | 6.3958 |
| 6.105 | 77.07 | 199000 | 6.4688 |
| 6.1359 | 77.27 | 199500 | 6.3164 |
| 6.0968 | 77.46 | 200000 | 6.3570 |
| 6.1781 | 77.65 | 200500 | 6.2488 |
| 6.1875 | 77.85 | 201000 | 6.2816 |
| 6.1976 | 78.04 | 201500 | 6.4296 |
| 6.1707 | 78.23 | 202000 | 6.1862 |
| 6.151 | 78.43 | 202500 | 6.3307 |
| 6.1146 | 78.62 | 203000 | 6.3054 |
| 6.1971 | 78.81 | 203500 | 6.3942 |
| 6.2385 | 79.01 | 204000 | 6.2846 |
| 6.1088 | 79.2 | 204500 | 6.5546 |
| 6.1813 | 79.4 | 205000 | 6.4800 |
| 6.2204 | 79.59 | 205500 | 6.3196 |
| 6.1673 | 79.78 | 206000 | 6.4677 |
| 6.2331 | 79.98 | 206500 | 6.2786 |
| 6.0863 | 80.17 | 207000 | 6.3500 |
| 6.1129 | 80.36 | 207500 | 6.2943 |
| 6.158 | 80.56 | 208000 | 6.3409 |
| 6.1544 | 80.75 | 208500 | 6.2672 |
| 6.1335 | 80.95 | 209000 | 6.3621 |
| 6.224 | 81.14 | 209500 | 6.3680 |
| 6.0753 | 81.33 | 210000 | 6.1947 |
| 6.1137 | 81.53 | 210500 | 6.4236 |
| 6.1313 | 81.72 | 211000 | 6.2549 |
| 6.2197 | 81.91 | 211500 | 6.2092 |
| 6.1815 | 82.11 | 212000 | 6.3099 |
| 6.0535 | 82.3 | 212500 | 6.4345 |
| 6.1012 | 82.49 | 213000 | 6.2444 |
| 6.1536 | 82.69 | 213500 | 6.4629 |
| 6.1593 | 82.88 | 214000 | 6.2807 |
| 6.1092 | 83.08 | 214500 | 6.3169 |
| 6.1626 | 83.27 | 215000 | 6.1781 |
| 6.1653 | 83.46 | 215500 | 6.3139 |
| 6.2015 | 83.66 | 216000 | 6.4126 |
| 6.1827 | 83.85 | 216500 | 6.3927 |
| 6.1526 | 84.04 | 217000 | 6.2633 |
| 6.1705 | 84.24 | 217500 | 6.4309 |
| 6.0917 | 84.43 | 218000 | 6.4007 |
| 6.1351 | 84.62 | 218500 | 6.2670 |
| 6.0758 | 84.82 | 219000 | 6.4789 |
| 6.0173 | 85.01 | 219500 | 6.3091 |
| 6.1034 | 85.21 | 220000 | 6.3755 |
| 6.1238 | 85.4 | 220500 | 6.6736 |
| 6.1324 | 85.59 | 221000 | 6.3754 |
| 6.1871 | 85.79 | 221500 | 6.2746 |
| 6.1551 | 85.98 | 222000 | 6.4359 |
| 6.2199 | 86.17 | 222500 | 6.2856 |
| 6.1714 | 86.37 | 223000 | 6.1998 |
| 6.0669 | 86.56 | 223500 | 6.4683 |
| 6.1031 | 86.75 | 224000 | 6.1940 |
| 6.1374 | 86.95 | 224500 | 6.4674 |
| 6.1401 | 87.14 | 225000 | 6.3528 |
| 6.1558 | 87.34 | 225500 | 6.4459 |
| 6.0512 | 87.53 | 226000 | 6.1757 |
| 6.1377 | 87.72 | 226500 | 6.2645 |
| 6.1375 | 87.92 | 227000 | 6.2402 |
| 6.0926 | 88.11 | 227500 | 6.3162 |
| 6.0877 | 88.3 | 228000 | 6.3065 |
| 6.0844 | 88.5 | 228500 | 6.4125 |
| 6.0767 | 88.69 | 229000 | 6.4825 |
| 6.191 | 88.88 | 229500 | 6.3003 |
| 6.1155 | 89.08 | 230000 | 6.4964 |
| 6.1384 | 89.27 | 230500 | 6.2906 |
| 6.0938 | 89.47 | 231000 | 6.2359 |
| 6.1078 | 89.66 | 231500 | 6.2931 |
| 6.131 | 89.85 | 232000 | 6.4932 |
| 6.0469 | 90.05 | 232500 | 6.3953 |
| 6.0826 | 90.24 | 233000 | 6.2308 |
| 6.1054 | 90.43 | 233500 | 6.4096 |
| 6.128 | 90.63 | 234000 | 6.3669 |
| 6.0942 | 90.82 | 234500 | 6.2291 |
| 6.0902 | 91.01 | 235000 | 6.4129 |
| 6.0365 | 91.21 | 235500 | 6.4048 |
| 6.103 | 91.4 | 236000 | 6.3340 |
| 6.1112 | 91.6 | 236500 | 6.5937 |
| 6.1402 | 91.79 | 237000 | 6.3795 |
| 6.1814 | 91.98 | 237500 | 6.4101 |
| 6.0968 | 92.18 | 238000 | 6.3921 |
| 6.0877 | 92.37 | 238500 | 6.2881 |
| 6.1681 | 92.56 | 239000 | 6.3770 |
| 6.0637 | 92.76 | 239500 | 6.3274 |
| 6.0718 | 92.95 | 240000 | 6.3356 |
| 6.1199 | 93.14 | 240500 | 6.2784 |
| 6.0929 | 93.34 | 241000 | 6.4138 |
| 6.1539 | 93.53 | 241500 | 6.2909 |
| 6.1256 | 93.73 | 242000 | 6.2933 |
| 6.1872 | 93.92 | 242500 | 6.2459 |
| 6.1 | 94.11 | 243000 | 6.3982 |
| 6.1501 | 94.31 | 243500 | 6.2645 |
| 6.0529 | 94.5 | 244000 | 6.3445 |
| 6.0918 | 94.69 | 244500 | 6.2230 |
| 6.1225 | 94.89 | 245000 | 6.3748 |
| 5.9916 | 95.08 | 245500 | 6.2621 |
| 6.1878 | 95.27 | 246000 | 6.3305 |
| 6.0875 | 95.47 | 246500 | 6.2892 |
| 6.0954 | 95.66 | 247000 | 6.2581 |
| 6.1167 | 95.86 | 247500 | 6.2420 |
| 6.1107 | 96.05 | 248000 | 6.4639 |
| 6.0755 | 96.24 | 248500 | 6.3044 |
| 6.0976 | 96.44 | 249000 | 6.3260 |
| 6.1027 | 96.63 | 249500 | 6.2483 |
| 6.1056 | 96.82 | 250000 | 6.3190 |
| 6.0187 | 97.02 | 250500 | 6.2452 |
| 6.1126 | 97.21 | 251000 | 6.2942 |
| 6.1266 | 97.41 | 251500 | 6.4213 |
| 6.1217 | 97.6 | 252000 | 6.3464 |
| 6.0499 | 97.79 | 252500 | 6.3229 |
| 6.1124 | 97.99 | 253000 | 6.3027 |
| 6.108 | 98.18 | 253500 | 6.4417 |
| 6.0534 | 98.37 | 254000 | 6.3782 |
| 6.0398 | 98.57 | 254500 | 6.3178 |
| 6.047 | 98.76 | 255000 | 6.3298 |
| 6.1422 | 98.95 | 255500 | 6.3007 |
| 6.1034 | 99.15 | 256000 | 6.3839 |
| 6.0293 | 99.34 | 256500 | 6.4343 |
| 6.0068 | 99.54 | 257000 | 6.3719 |
| 6.1498 | 99.73 | 257500 | 6.2130 |
| 6.1296 | 99.92 | 258000 | 6.2153 |
| 6.0647 | 100.12 | 258500 | 6.3747 |
| 6.1241 | 100.31 | 259000 | 6.2765 |
| 6.0512 | 100.5 | 259500 | 6.1901 |
| 6.0628 | 100.7 | 260000 | 6.2999 |
| 6.1612 | 100.89 | 260500 | 6.4049 |
| 6.1089 | 101.08 | 261000 | 6.3761 |
| 6.0248 | 101.28 | 261500 | 6.3189 |
| 6.0749 | 101.47 | 262000 | 6.3750 |
| 6.0599 | 101.67 | 262500 | 6.3957 |
| 6.0651 | 101.86 | 263000 | 6.3435 |
| 6.1145 | 102.05 | 263500 | 6.3425 |
| 6.0432 | 102.25 | 264000 | 6.2033 |
| 6.0281 | 102.44 | 264500 | 6.0788 |
| 6.0403 | 102.63 | 265000 | 6.3782 |
| 6.0782 | 102.83 | 265500 | 6.2826 |
| 6.1114 | 103.02 | 266000 | 6.2191 |
| 6.0744 | 103.21 | 266500 | 6.2138 |
| 6.1456 | 103.41 | 267000 | 6.3423 |
| 6.0652 | 103.6 | 267500 | 6.3511 |
| 6.1563 | 103.8 | 268000 | 6.0975 |
| 6.167 | 103.99 | 268500 | 6.3246 |
| 6.0227 | 104.18 | 269000 | 6.4232 |
| 6.0676 | 104.38 | 269500 | 6.6261 |
| 6.0941 | 104.57 | 270000 | 6.2981 |
| 6.0018 | 104.76 | 270500 | 6.3241 |
| 6.052 | 104.96 | 271000 | 6.3419 |
| 6.0276 | 105.15 | 271500 | 6.2942 |
| 5.9867 | 105.34 | 272000 | 6.3718 |
| 6.0223 | 105.54 | 272500 | 6.3350 |
| 6.0527 | 105.73 | 273000 | 6.1741 |
| 6.0598 | 105.93 | 273500 | 6.2026 |
| 6.0823 | 106.12 | 274000 | 6.3846 |
| 6.0429 | 106.31 | 274500 | 6.1483 |
| 6.0723 | 106.51 | 275000 | 6.1797 |
| 6.0744 | 106.7 | 275500 | 6.4179 |
| 6.0975 | 106.89 | 276000 | 6.2767 |
| 6.0867 | 107.09 | 276500 | 6.3929 |
| 6.0149 | 107.28 | 277000 | 6.2163 |
| 6.0958 | 107.47 | 277500 | 6.3619 |
| 6.0795 | 107.67 | 278000 | 6.2430 |
| 5.9994 | 107.86 | 278500 | 6.2854 |
| 6.0246 | 108.06 | 279000 | 6.2356 |
| 5.9845 | 108.25 | 279500 | 6.4934 |
| 6.0587 | 108.44 | 280000 | 6.1357 |
| 6.0536 | 108.64 | 280500 | 6.2619 |
| 6.1245 | 108.83 | 281000 | 6.2436 |
| 6.04 | 109.02 | 281500 | 6.2919 |
| 6.0972 | 109.22 | 282000 | 6.2054 |
| 6.0376 | 109.41 | 282500 | 6.3734 |
| 6.0864 | 109.6 | 283000 | 6.3019 |
| 5.9986 | 109.8 | 283500 | 6.1834 |
| 6.0949 | 109.99 | 284000 | 6.3342 |
| 6.0034 | 110.19 | 284500 | 6.2156 |
| 6.016 | 110.38 | 285000 | 6.3797 |
| 6.0444 | 110.57 | 285500 | 6.2416 |
| 6.0143 | 110.77 | 286000 | 6.3332 |
| 5.9775 | 110.96 | 286500 | 6.2513 |
| 6.0207 | 111.15 | 287000 | 6.3844 |
| 5.9872 | 111.35 | 287500 | 6.3577 |
| 6.1172 | 111.54 | 288000 | 6.2747 |
| 6.0457 | 111.74 | 288500 | 6.1936 |
| 6.0373 | 111.93 | 289000 | 6.1718 |
| 6.0713 | 112.12 | 289500 | 6.3335 |
| 6.1118 | 112.32 | 290000 | 6.2619 |
| 6.0094 | 112.51 | 290500 | 6.2070 |
| 6.0613 | 112.7 | 291000 | 6.2200 |
| 6.1184 | 112.9 | 291500 | 6.4332 |
| 5.9915 | 113.09 | 292000 | 6.2745 |
| 6.0551 | 113.28 | 292500 | 6.2810 |
| 6.0033 | 113.48 | 293000 | 6.2718 |
| 5.9226 | 113.67 | 293500 | 6.3007 |
| 6.0805 | 113.87 | 294000 | 6.1925 |
| 6.0287 | 114.06 | 294500 | 6.4383 |
| 6.0515 | 114.25 | 295000 | 6.3062 |
| 5.9819 | 114.45 | 295500 | 6.2525 |
| 6.0159 | 114.64 | 296000 | 6.2048 |
| 5.976 | 114.83 | 296500 | 6.3714 |
| 6.1055 | 115.03 | 297000 | 6.1493 |
| 6.0823 | 115.22 | 297500 | 6.2946 |
| 5.9474 | 115.41 | 298000 | 6.2729 |
| 6.0996 | 115.61 | 298500 | 6.2949 |
| 6.0486 | 115.8 | 299000 | 6.2528 |
| 6.0683 | 116.0 | 299500 | 6.1331 |
| 6.0145 | 116.19 | 300000 | 6.3231 |
| 5.9884 | 116.38 | 300500 | 6.2335 |
| 6.0666 | 116.58 | 301000 | 6.1505 |
| 6.068 | 116.77 | 301500 | 6.3078 |
| 5.989 | 116.96 | 302000 | 6.3503 |
| 5.9933 | 117.16 | 302500 | 6.2192 |
| 5.9957 | 117.35 | 303000 | 6.4492 |
| 6.0553 | 117.54 | 303500 | 6.2934 |
| 6.0764 | 117.74 | 304000 | 6.2388 |
| 6.1034 | 117.93 | 304500 | 6.3082 |
| 6.0721 | 118.13 | 305000 | 6.1408 |
| 5.9929 | 118.32 | 305500 | 6.3172 |
| 5.9634 | 118.51 | 306000 | 6.1190 |
| 6.0719 | 118.71 | 306500 | 6.1553 |
| 6.1254 | 118.9 | 307000 | 6.3389 |
| 5.986 | 119.09 | 307500 | 6.1912 |
| 6.0306 | 119.29 | 308000 | 6.3616 |
| 6.0372 | 119.48 | 308500 | 6.2718 |
| 6.0292 | 119.67 | 309000 | 6.4873 |
| 6.0608 | 119.87 | 309500 | 6.3311 |
| 6.0595 | 120.06 | 310000 | 6.3818 |
| 5.9674 | 120.26 | 310500 | 6.3674 |
| 6.0378 | 120.45 | 311000 | 6.3055 |
| 6.0668 | 120.64 | 311500 | 6.1886 |
| 6.0235 | 120.84 | 312000 | 6.3711 |
| 5.9634 | 121.03 | 312500 | 6.2133 |
| 5.9416 | 121.22 | 313000 | 6.2171 |
| 5.9672 | 121.42 | 313500 | 6.3439 |
| 5.9954 | 121.61 | 314000 | 6.2243 |
| 6.0735 | 121.8 | 314500 | 6.1662 |
| 6.0652 | 122.0 | 315000 | 6.2343 |
| 6.0415 | 122.19 | 315500 | 6.2711 |
| 5.941 | 122.39 | 316000 | 6.2159 |
| 6.0866 | 122.58 | 316500 | 6.1542 |
| 6.1004 | 122.77 | 317000 | 6.3206 |
| 6.0116 | 122.97 | 317500 | 6.3592 |
| 6.052 | 123.16 | 318000 | 6.1616 |
| 6.0093 | 123.35 | 318500 | 6.2311 |
| 5.9723 | 123.55 | 319000 | 6.2176 |
| 5.9651 | 123.74 | 319500 | 6.2870 |
| 5.9994 | 123.93 | 320000 | 6.1601 |
| 6.0534 | 124.13 | 320500 | 6.1234 |
| 5.9759 | 124.32 | 321000 | 6.1133 |
| 6.0716 | 124.52 | 321500 | 6.1318 |
| 5.9999 | 124.71 | 322000 | 6.2723 |
| 5.9449 | 124.9 | 322500 | 6.3393 |
| 5.9497 | 125.1 | 323000 | 6.3490 |
| 6.0081 | 125.29 | 323500 | 6.2434 |
| 5.9899 | 125.48 | 324000 | 6.2355 |
| 5.9943 | 125.68 | 324500 | 6.2021 |
| 6.039 | 125.87 | 325000 | 6.2081 |
| 5.971 | 126.07 | 325500 | 6.2518 |
| 6.0113 | 126.26 | 326000 | 6.2984 |
| 5.9926 | 126.45 | 326500 | 6.1162 |
| 5.9795 | 126.65 | 327000 | 6.1953 |
| 5.9839 | 126.84 | 327500 | 6.3870 |
| 6.0708 | 127.03 | 328000 | 6.2780 |
| 5.9934 | 127.23 | 328500 | 6.2218 |
| 5.9169 | 127.42 | 329000 | 6.2205 |
| 6.0101 | 127.61 | 329500 | 6.2630 |
| 5.9775 | 127.81 | 330000 | 6.0953 |
| 6.0563 | 128.0 | 330500 | 6.2625 |
| 5.9326 | 128.2 | 331000 | 6.3160 |
| 6.0056 | 128.39 | 331500 | 6.2531 |
| 5.9701 | 128.58 | 332000 | 6.3291 |
| 5.9928 | 128.78 | 332500 | 6.2678 |
| 6.0317 | 128.97 | 333000 | 6.2241 |
| 5.9644 | 129.16 | 333500 | 6.3432 |
| 5.9619 | 129.36 | 334000 | 6.2009 |
| 6.0502 | 129.55 | 334500 | 6.2666 |
| 6.0493 | 129.74 | 335000 | 6.3265 |
| 5.9662 | 129.94 | 335500 | 6.2069 |
| 5.929 | 130.13 | 336000 | 6.3107 |
| 5.8884 | 130.33 | 336500 | 6.2392 |
| 6.0248 | 130.52 | 337000 | 6.3263 |
| 5.9749 | 130.71 | 337500 | 6.2351 |
| 6.0686 | 130.91 | 338000 | 6.1432 |
| 5.979 | 131.1 | 338500 | 6.2057 |
| 5.9756 | 131.29 | 339000 | 6.1497 |
| 6.0542 | 131.49 | 339500 | 6.2669 |
| 6.0454 | 131.68 | 340000 | 6.2311 |
| 6.0368 | 131.87 | 340500 | 6.0745 |
| 6.0784 | 132.07 | 341000 | 6.1181 |
| 5.8907 | 132.26 | 341500 | 6.2473 |
| 5.9635 | 132.46 | 342000 | 6.1953 |
| 5.9559 | 132.65 | 342500 | 6.0708 |
| 5.9116 | 132.84 | 343000 | 6.1112 |
| 6.0154 | 133.04 | 343500 | 6.2833 |
| 6.0474 | 133.23 | 344000 | 6.2091 |
| 5.9661 | 133.42 | 344500 | 6.1129 |
| 5.9438 | 133.62 | 345000 | 6.2510 |
| 5.9498 | 133.81 | 345500 | 6.1699 |
| 5.9987 | 134.0 | 346000 | 6.0196 |
| 6.0424 | 134.2 | 346500 | 6.2066 |
| 5.9929 | 134.39 | 347000 | 6.2394 |
| 5.9699 | 134.59 | 347500 | 6.1630 |
| 5.972 | 134.78 | 348000 | 6.3057 |
| 5.8912 | 134.97 | 348500 | 6.2970 |
| 5.9103 | 135.17 | 349000 | 6.3566 |
| 6.0203 | 135.36 | 349500 | 6.2139 |
| 5.9869 | 135.55 | 350000 | 6.0769 |
| 5.9502 | 135.75 | 350500 | 6.0977 |
| 6.0137 | 135.94 | 351000 | 6.1849 |
| 5.9812 | 136.13 | 351500 | 6.1549 |
| 5.9503 | 136.33 | 352000 | 6.2457 |
| 5.9875 | 136.52 | 352500 | 6.2826 |
| 5.9876 | 136.72 | 353000 | 6.3110 |
| 6.042 | 136.91 | 353500 | 6.1327 |
| 6.0329 | 137.1 | 354000 | 6.1691 |
| 5.9558 | 137.3 | 354500 | 6.2415 |
| 5.9064 | 137.49 | 355000 | 6.3041 |
| 6.083 | 137.68 | 355500 | 6.2303 |
| 6.0357 | 137.88 | 356000 | 6.1209 |
| 6.0468 | 138.07 | 356500 | 6.1150 |
| 5.964 | 138.26 | 357000 | 6.1214 |
| 5.9884 | 138.46 | 357500 | 6.1821 |
| 5.9335 | 138.65 | 358000 | 6.1667 |
| 5.9968 | 138.85 | 358500 | 6.2252 |
| 5.9721 | 139.04 | 359000 | 6.2437 |
| 5.913 | 139.23 | 359500 | 6.2301 |
| 5.9755 | 139.43 | 360000 | 6.1756 |
| 5.9696 | 139.62 | 360500 | 6.1874 |
| 6.0092 | 139.81 | 361000 | 6.0900 |
| 5.9676 | 140.01 | 361500 | 6.1980 |
| 5.9832 | 140.2 | 362000 | 6.1899 |
| 5.9993 | 140.4 | 362500 | 6.1638 |
| 5.9506 | 140.59 | 363000 | 6.1104 |
| 6.0256 | 140.78 | 363500 | 6.1285 |
| 6.0368 | 140.98 | 364000 | 6.1401 |
| 5.9722 | 141.17 | 364500 | 6.2675 |
| 5.9025 | 141.36 | 365000 | 6.2461 |
| 6.0218 | 141.56 | 365500 | 6.1901 |
| 6.0086 | 141.75 | 366000 | 6.0529 |
| 5.9125 | 141.94 | 366500 | 6.1999 |
| 5.9919 | 142.14 | 367000 | 6.0962 |
| 6.0066 | 142.33 | 367500 | 6.2817 |
| 5.9304 | 142.53 | 368000 | 6.1493 |
| 5.9526 | 142.72 | 368500 | 6.2055 |
| 6.039 | 142.91 | 369000 | 6.1313 |
| 6.0084 | 143.11 | 369500 | 6.2798 |
| 5.9637 | 143.3 | 370000 | 6.0965 |
| 5.9513 | 143.49 | 370500 | 6.2137 |
| 5.9422 | 143.69 | 371000 | 6.1663 |
| 5.9425 | 143.88 | 371500 | 6.0414 |
| 5.9642 | 144.07 | 372000 | 6.2704 |
| 6.0213 | 144.27 | 372500 | 6.3381 |
| 6.014 | 144.46 | 373000 | 6.2437 |
| 5.9038 | 144.66 | 373500 | 6.1289 |
| 5.96 | 144.85 | 374000 | 6.2737 |
| 6.0191 | 145.04 | 374500 | 6.1252 |
| 5.9451 | 145.24 | 375000 | 6.2172 |
| 5.9917 | 145.43 | 375500 | 6.0619 |
| 6.019 | 145.62 | 376000 | 6.1719 |
| 5.9217 | 145.82 | 376500 | 6.1744 |
| 5.9741 | 146.01 | 377000 | 6.3044 |
| 5.951 | 146.2 | 377500 | 6.3080 |
| 5.9659 | 146.4 | 378000 | 6.1352 |
| 5.9307 | 146.59 | 378500 | 6.2410 |
| 5.9273 | 146.79 | 379000 | 6.2210 |
| 5.9551 | 146.98 | 379500 | 6.1247 |
| 6.0192 | 147.17 | 380000 | 6.2649 |
| 5.9587 | 147.37 | 380500 | 6.2528 |
| 5.9878 | 147.56 | 381000 | 6.0906 |
| 5.937 | 147.75 | 381500 | 6.3361 |
| 6.0034 | 147.95 | 382000 | 6.1559 |
| 5.9791 | 148.14 | 382500 | 6.2430 |
| 5.8866 | 148.33 | 383000 | 6.1914 |
| 5.9565 | 148.53 | 383500 | 6.1851 |
| 5.9583 | 148.72 | 384000 | 6.1961 |
| 5.9533 | 148.92 | 384500 | 6.2176 |
| 6.0106 | 149.11 | 385000 | 6.2071 |
| 5.9114 | 149.3 | 385500 | 6.1565 |
| 5.9484 | 149.5 | 386000 | 6.1509 |
| 5.9565 | 149.69 | 386500 | 6.1340 |
| 6.0005 | 149.88 | 387000 | 6.1874 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Denny29/DialoGPT-medium-asunayuuki
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: arbts/ppo-Pyramids-Training
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DeskDown/MarianMixFT_en-ms
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: manuelmaiorano/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DewiBrynJones/wav2vec2-large-xlsr-welsh
|
[
"cy",
"dataset:common_voice",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.32 +/- 22.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent from the Hub (the repo id and checkpoint filename below are placeholders — check this repo's file list for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub, then load it with SB3
checkpoint = load_from_hub(repo_id="<repo_id>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
DicoTiar/wisdomfiy
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=SoccerTwos --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: dmenini/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DivyanshuSheth/T5-Seq2Seq-Final
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- germeval_14
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-de-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: germeval_14
type: germeval_14
config: germeval_14
split: test
args: germeval_14
metrics:
- name: Precision
type: precision
value: 0.8109431552054502
- name: Recall
type: recall
value: 0.771990271584921
- name: F1
type: f1
value: 0.7909874364032811
- name: Accuracy
type: accuracy
value: 0.9786213727432309
language:
- de
widget:
- text: Mein Name ist Wolfgang und ich lebe in Berlin
example_title: Example 1
- text: Mein Name ist Sarah und ich lebe in London
example_title: Example 2
- text: Mein Name ist Clara und ich lebe in Berkeley, California.
example_title: Example 3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-de-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the germeval_14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
- Precision: 0.8109
- Recall: 0.7720
- F1: 0.7910
- Accuracy: 0.9786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained on data that follows the [`IOB`](<https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)>) convention. Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
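To make the IOB convention concrete, here is a minimal sketch (not taken from the model's training code) that decodes a sequence of IOB tags back into entity spans; the token/tag pair is illustrative and mirrors the widget examples above:

```python
def iob_to_spans(tokens, tags):
    """Group B-/I- tagged tokens into (entity_type, text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)      # continuation of the open entity
        else:                             # "O" tag, or a stray "I-" with no open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

print(iob_to_spans(
    ["Mein", "Name", "ist", "Wolfgang", "und", "ich", "lebe", "in", "Berlin"],
    ["O", "O", "O", "B-PER", "O", "O", "O", "O", "B-LOC"],
))  # → [('PER', 'Wolfgang'), ('LOC', 'Berlin')]
```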
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.104 | 1.0 | 3000 | 0.0973 | 0.7027 | 0.7323 | 0.7172 | 0.9712 |
| 0.0597 | 2.0 | 6000 | 0.0942 | 0.8135 | 0.7172 | 0.7623 | 0.9766 |
| 0.0345 | 3.0 | 9000 | 0.1051 | 0.7924 | 0.7569 | 0.7742 | 0.9773 |
| 0.0172 | 4.0 | 12000 | 0.1170 | 0.8074 | 0.7628 | 0.7844 | 0.9779 |
| 0.0092 | 5.0 | 15000 | 0.1264 | 0.8068 | 0.7803 | 0.7933 | 0.9788 |
| 0.0035 | 6.0 | 18000 | 0.1374 | 0.8109 | 0.7720 | 0.7910 | 0.9786 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Dizoid/Lll
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: NoNameFound/poca-SoccerTwos-pretrained150
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dkwkk/W
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T18:04:22Z
|
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 31.26 +/- 57.03
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo2',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'pregonas/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Dmitriiserg/Pxd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: strict-small-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict-small-1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.5772 | 49.96 | 400 | 6.3333 |
| 1.4544 | 99.96 | 800 | 8.0001 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Doiman/DialoGPT-medium-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| null |
---
license: mit
datasets:
- yahma/alpaca-cleaned
---
This repo contains a low-rank adapter for LLaMA-7b fit on the Cleaned Alpaca dataset (with the new GPT-4 training data).
This version of the weights was trained with the following hyperparameters:
- Cleaned dataset: Snapshot April 8, 2023
- Epochs: 6 (checkpoint with lowest eval loss at 3.6 epochs uploaded here)
- Validation set size: 1500
- Batch size: 128
- Micro batch size: 8
- Cutoff length: 512
- Learning rate: 3e-4
- Lora r: 16
- Lora target modules: q_proj, k_proj, v_proj, o_proj
That is:

```
python finetune.py \
    --base_model='yahma/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --num_epochs=6 \
    --cutoff_len=512 \
    --output_dir='./lora-alpaca' \
    --lora_target_modules='[q_proj,k_proj, v_proj, o_proj]' \
    --lora_r=16 \
    --val_set_size 1500 \
    --micro_batch_size=8
```
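One detail implied by the numbers above (assuming the standard alpaca-lora training loop): the batch size of 128 is reached by accumulating gradients over several micro batches of 8 per optimizer step:

```python
# Sketch of how batch size and micro batch size relate via gradient
# accumulation (assumes the standard alpaca-lora convention).
batch_size = 128        # examples per optimizer step
micro_batch_size = 8    # examples per forward/backward pass
gradient_accumulation_steps = batch_size // micro_batch_size
print(gradient_accumulation_steps)  # → 16
```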
|
DongHai/DialoGPT-small-rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: c0ldstudy/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dongjae/mrc2reader
|
[
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-author-clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-author-clm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8457 | 1.0 | 160 | 3.7676 |
| 3.7418 | 2.0 | 320 | 3.7344 |
| 3.645 | 3.0 | 480 | 3.7179 |
| 3.6045 | 4.0 | 640 | 3.7103 |
| 3.57 | 5.0 | 800 | 3.7090 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Dongmin/testmodel
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 11
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3319
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Waynehillsdev/Wayne_NLP_mT5
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
**Train-Test Set:** "teknofest_train_final.csv"
**Model:** "dbmdz/bert-base-turkish-128k-uncased"
**Preprocessing**
- Characters were lowercased
- Punctuation marks were removed
## Tokenizer Parameters
```
max_length=64
padding=True
truncation=True
```
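As an illustrative sketch (plain Python, not the actual Hugging Face tokenizer, which also handles special tokens and attention masks), `padding=True` with `truncation=True` at `max_length=64` behaves roughly like:

```python
def pad_or_truncate(token_ids, max_length=64, pad_id=0):
    """Truncate to max_length, or right-pad with pad_id (truncation=True, padding=True)."""
    if len(token_ids) > max_length:
        return token_ids[:max_length]
    return token_ids + [pad_id] * (max_length - len(token_ids))

print(len(pad_or_truncate(list(range(10)))))   # short input is padded up to 64
print(len(pad_or_truncate(list(range(100)))))  # long input is truncated down to 64
```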
## Training Parameters
- **Epoch:** 3
- **Learning Rate:** 7e-5
- **Batch-Size:** 64
- **Tokenizer Length:** 64
- **Loss:** BCE
- **Online Hard Example Mining:** Enabled
- **Class-Weighting:** Enabled (^0.3)
- **Early Stopping:** Disabled
- **Stratified Batch Sampling:** Enabled
- **Gradient Accumulation:** Disabled
- **LR Scheduler:** Cosine-with-Warmup
- **Warmup Ratio:** 0.1
- **Weight Decay:** 0.01
- **LLRD:** 0.95
- **Label Smoothing:** 0.05
- **Gradient Clipping:** 1.0
- **MLM Pre-Training:** Disabled
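The class-weighting exponent above (^0.3) suggests inverse-frequency weights dampened by a power of 0.3. A hedged sketch of how such weights could be computed (the exact formula used in training is not specified on this card):

```python
from collections import Counter

def class_weights(labels, power=0.3):
    """Inverse-frequency class weights, dampened by raising to `power`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {c: (total / n) ** power for c, n in counts.items()}

# Illustrative counts taken from two of the classes in the support column below.
w = class_weights(["OTHER"] * 3528 + ["INSULT"] * 2393)
print(w)  # the rarer class receives the larger weight
```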
## CV10 Results
```
precision recall f1-score support
INSULT 0.9098 0.9143 0.9120 2393
OTHER 0.9596 0.9481 0.9538 3528
PROFANITY 0.9599 0.9575 0.9587 2376
RACIST 0.9551 0.9636 0.9594 2033
SEXIST 0.9552 0.9635 0.9593 2081
accuracy 0.9485 12411
macro avg 0.9479 0.9494 0.9486 12411
weighted avg 0.9486 0.9485 0.9485 12411
```
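The macro-averaged F1 in the table above is the unweighted mean of the five per-class F1 scores, which can be verified directly:

```python
f1 = {"INSULT": 0.9120, "OTHER": 0.9538, "PROFANITY": 0.9587,
      "RACIST": 0.9594, "SEXIST": 0.9593}
macro_f1 = sum(f1.values()) / len(f1)
print(round(macro_f1, 4))  # 0.9486, matching the macro avg row
```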
|
Waynehillsdev/Waynehills-STT-doogie-server
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 61
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: yumingyi/poca-SoccerTwos-v3
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Doohae/p_encoder
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- lmflow_instruction
model-index:
- name: 046_inst-tuning_model-gpt_neo2.7B_num-epoch-5_init-lr-2e-5_bf-16_blocksize768
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 046_inst-tuning_model-gpt_neo2.7B_num-epoch-5_init-lr-2e-5_bf-16_blocksize768
This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on the lmflow_instruction dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Doohae/q_encoder
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('JKSoon/sd-class-cats')
image = pipeline().images[0]
image
```
|
Doohae/roberta
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-large-clang8-e1-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-clang8-e1-b16
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2994
- Rouge1: 80.9044
- Rouge2: 74.7041
- Rougel: 80.3109
- Rougelsum: 80.3664
- Gen Len: 16.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2432 | 0.25 | 36000 | 0.4018 | 78.4447 | 71.3656 | 77.7552 | 77.8451 | 15.9010 |
| 0.1837 | 0.49 | 72000 | 0.3781 | 76.8828 | 69.9993 | 76.0584 | 76.1479 | 15.4026 |
| 0.1511 | 0.74 | 108000 | 0.3282 | 79.7898 | 73.329 | 79.1608 | 79.2416 | 15.9021 |
| 0.1267 | 0.98 | 144000 | 0.2994 | 80.9044 | 74.7041 | 80.3109 | 80.3664 | 16.0625 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.11.0a0+b6df043
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Doquey/DialoGPT-small-Luisbot1
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9328
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2304
- Accuracy: 0.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
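With the linear scheduler, the learning rate decays from 2e-5 at step 0 to 0 at the final step (3126 total steps over 2 epochs, per the results table below). A sketch assuming zero warmup steps, which this card does not state explicitly:

```python
def linear_lr(step, base_lr=2e-5, total_steps=3126):
    """Linearly decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))     # 2e-05 at the start
print(linear_lr(1563))  # half the base rate at the epoch boundary
print(linear_lr(3126))  # 0.0 at the end of training
```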
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2312 | 1.0 | 1563 | 0.1898 | 0.9276 |
| 0.1522 | 2.0 | 3126 | 0.2304 | 0.9328 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Doquey/DialoGPT-small-Michaelbot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 662.00 +/- 263.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Hristo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Hristo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Hristo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
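From these hyperparameters, the ε-greedy exploration rate decays linearly to `exploration_final_eps = 0.01` over the first `exploration_fraction * n_timesteps = 100,000` steps. A sketch of SB3's linear schedule; the initial ε of 1.0 is SB3's default and is not listed above:

```python
def epsilon(step, total_steps=1_000_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """SB3-style linear exploration schedule: decay over fraction * total_steps, then hold."""
    progress = min(step / (fraction * total_steps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))        # 1.0 (fully random at the start)
print(epsilon(50_000))   # ~0.505 (halfway through the decay)
print(epsilon(500_000))  # ~0.01 (schedule finished, held constant)
```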
|
Doxophobia/DialoGPT-medium-celeste
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1619674731975786496/gGJpxiyj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">praisegio</div>
<div style="text-align: center; font-size: 14px;">@fuckrvt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from praisegio.
| Data | praisegio |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 203 |
| Short tweets | 778 |
| Tweets kept | 2231 |
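The "Tweets kept" row follows from the filtering above: retweets and short tweets are dropped from the downloaded set.

```python
downloaded, retweets, short = 3212, 203, 778
kept = downloaded - retweets - short
print(kept)  # 2231, matching the table
```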
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4unngzee/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fuckrvt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x3e57izg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x3e57izg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fuckrvt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PoleCart-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DoyyingFace/bert-asian-hate-tweets-concat-clean
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| null |
|
albert-large-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26,792
| 2023-04-02T19:35:40Z
|
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: req_mod_ner_modelv2
results: []
widget:
- text: >-
De Oplossing ondersteunt het zoeken op de metadata van zaken, documenten en
objecten en op gegevens uit de basisregistraties die gekoppeld zijn aan een
zaak.
- text: >-
De Oplossing ondersteunt parafering en het plaatsen van een gecertificeerde
elektronische handtekening.
- text: >-
De Aangeboden oplossing stelt de medewerker in staat een zaak te
registreren.
- text: >-
Het Financieel systeem heeft functionaliteit om een debiteurenadministratie
te voeren.
- text: >-
Als gebruiker wil ik dat de oplossing mij naar zaken laat zoeken op basis
van zaaknummer, zaaktitel, omschrijving en datum.
language:
- nl
---
# req_mod_ner_modelv2
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on a
private dataset of 300 sentences/phrases with 1,954 token labels (IOB2 format), aimed at extracting software-requirements-related
named entities in Dutch. The following labels are used:
- Actor (used for all types of software users and groups of users)
- COTS (abbreviation for Commercial Off-The-Shelf Software)
- Function (used for functions, functionality, features)
- Result (used for system result, goals and system output)
- Entity (used for all entities stored/processed by the software)
- Attribute (used for attributes of entities)
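Since the labels follow the IOB2 format, a multi-token entity is tagged `B-<type>` on its first token and `I-<type>` on the rest, with `O` elsewhere. A small illustrative sketch (the tokens below are hypothetical, not drawn from the actual dataset):

```python
def iob2_tags(tokens, entity_start, entity_end, entity_type):
    """Tag tokens[entity_start:entity_end] as a single entity in IOB2 format."""
    tags = ["O"] * len(tokens)
    tags[entity_start] = f"B-{entity_type}"
    for i in range(entity_start + 1, entity_end):
        tags[i] = f"I-{entity_type}"
    return tags

tokens = ["De", "gebruiker", "registreert", "een", "nieuwe", "zaak"]
print(iob2_tags(tokens, 1, 2, "Actor"))
# ['O', 'B-Actor', 'O', 'O', 'O', 'O']
```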
Please contact me via [LinkedIn](https://www.linkedin.com/in/denizayhan/) if you have any questions about this model or the dataset used.
The dataset and this model were created as part of the final project assignment of the Natural Language Understanding course (XCS224U) from the Professional AI Program of the Stanford School of Engineering.
The model achieves the following results on the evaluation set:
- Loss: 0.6791
- Precision: 0.7515
- Recall: 0.7299
- F1: 0.7405
- Accuracy: 0.9253
# Metrics per named-entity
| NER-tag | Precision | Recall | F1 | Support |
|:---------:|:---------:|:------:|:----:|:-------:|
| Actor | 0.86 | 1.00 | 0.92 | 12 |
| COTS | 0.79 | 0.79 | 0.79 | 24 |
| Function | 0.73 | 0.66 | 0.69 | 62 |
| Result | 0.29 | 0.40 | 0.33 | 10 |
| Entity | 0.78 | 0.83 | 0.81 | 35 |
| Attribute | 0.92 | 0.71 | 0.80 | 31 |
## Intended uses & limitations
The model performs automated extraction of functionality concepts from source documents for which software requirements are needed. It is intended as a preprocessing step for Question-Answering.
## Training and evaluation data
The model was trained on the ReqModNer dataset. This dataset is private and contains 300 sentences/phrases and 1,954 IOB2 labels. The dataset is split 240/30/30 into train, validation and test. The reported metrics are from the evaluation on the test set. The validation set was used for cross-validation during training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 270 | 0.5418 | 0.6065 | 0.5402 | 0.5714 | 0.8802 |
| 0.5551 | 2.0 | 540 | 0.4299 | 0.5481 | 0.6552 | 0.5969 | 0.8896 |
| 0.5551 | 3.0 | 810 | 0.4987 | 0.6358 | 0.5517 | 0.5908 | 0.9020 |
| 0.1935 | 4.0 | 1080 | 0.5620 | 0.6159 | 0.4885 | 0.5449 | 0.8935 |
| 0.1935 | 5.0 | 1350 | 0.4922 | 0.6786 | 0.6552 | 0.6667 | 0.9121 |
| 0.0913 | 6.0 | 1620 | 0.5406 | 0.6087 | 0.5632 | 0.5851 | 0.8950 |
| 0.0913 | 7.0 | 1890 | 0.6307 | 0.7425 | 0.7126 | 0.7273 | 0.9222 |
| 0.0702 | 8.0 | 2160 | 0.4425 | 0.6684 | 0.7414 | 0.7030 | 0.9277 |
| 0.0702 | 9.0 | 2430 | 0.6028 | 0.7158 | 0.7529 | 0.7339 | 0.9285 |
| 0.0472 | 10.0 | 2700 | 0.6491 | 0.7303 | 0.7471 | 0.7386 | 0.9246 |
| 0.0472 | 11.0 | 2970 | 0.6442 | 0.7198 | 0.7529 | 0.7360 | 0.9292 |
| 0.0305 | 12.0 | 3240 | 0.5980 | 0.7412 | 0.7241 | 0.7326 | 0.9230 |
| 0.0209 | 13.0 | 3510 | 0.6186 | 0.7232 | 0.7356 | 0.7293 | 0.9238 |
| 0.0209 | 14.0 | 3780 | 0.6791 | 0.7515 | 0.7299 | 0.7405 | 0.9253 |
| 0.0148 | 15.0 | 4050 | 0.6832 | 0.7283 | 0.7241 | 0.7262 | 0.9238 |
| 0.0148 | 16.0 | 4320 | 0.6908 | 0.7412 | 0.7241 | 0.7326 | 0.9238 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.11.0
|
albert-xlarge-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 341
| 2023-04-02T19:24:48Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 617.00 +/- 146.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tommytran -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tommytran -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tommytran
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
albert-xlarge-v2
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,973
| 2023-04-02T19:25:18Z
|
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
|
albert-xxlarge-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7,091
| 2023-04-02T19:28:01Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook
# (it downloads and unpickles the saved agent from the Hub)
model = load_from_hub(repo_id="hussamalafandi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
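Once loaded, the agent acts greedily with respect to the Q-table. A minimal sketch of that step (assuming the pickled dict stores the table under a `qtable` key, which may vary between notebook versions):

```python
import numpy as np

def greedy_action(qtable, state):
    # The greedy policy picks the highest-valued action for the current state
    return int(np.argmax(qtable[state]))

# Toy 2-state, 3-action table standing in for model["qtable"]
qtable = np.array([[0.0, 1.0, 0.2],
                   [0.4, 0.1, 0.9]])
assert greedy_action(qtable, 0) == 1
```

In the evaluation loop, `env.step(greedy_action(model["qtable"], state))` would then be called until the episode ends.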
|
albert-xxlarge-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42,640
| null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixel-copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.40 +/- 15.35
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
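As that unit explains, REINFORCE weights each action's log-probability by the discounted return from that timestep. A minimal sketch of the return computation (illustrative only, not this model's training code):

```python
def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * G_{t+1}, computed by scanning the episode backwards
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# With gamma=0.5: G_2 = 1, G_1 = 1 + 0.5*1 = 1.5, G_0 = 1 + 0.5*1.5 = 1.75
assert discounted_returns([1, 1, 1], gamma=0.5) == [1.75, 1.5, 1.0]
```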
|
bert-base-cased-finetuned-mrpc
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11,644
| 2023-04-02T19:35:15Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook
# (it downloads and unpickles the saved agent from the Hub)
model = load_from_hub(repo_id="hussamalafandi/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bert-base-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,621,271
| 2023-04-02T19:35:35Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Stable-diffusion-Charro-suit-for-woman Dreambooth model trained by Emilianohack6950 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:



|
bert-base-german-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 175,983
| 2023-04-02T19:37:22Z
|
---
license: mit
pipeline_tag: text-classification
---
# roberta-nei-fact-check
This is a text-classification model based on the RoBERTa architecture. Its purpose is to identify whether a given claim, together with its evidence, contains enough information to make a fact-checking decision.
## Model Details
The model was trained using the Adam optimizer with a learning rate of 2e-4, an epsilon of 1e-8, and a weight decay of 2e-8. The training data consisted mainly of the FEVER and HoVer datasets, with a small sample of created data. The model returns two labels:
- 0: Enough information
- 1: Not enough information
The model uses a tokenizer for text classification and requires input in the form of a claim paired with its evidence: both texts should be supplied together to get the best results.
## Usage
To use this model, load it in Python with a framework such as PyTorch or TensorFlow. You can then pass in a claim together with its evidence, and the model will return a label indicating whether there is enough information for fact-checking.
Here is an example of how to use the model in PyTorch:
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# Load the tokenizer and model
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/roberta-nei-fact-check')
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/roberta-nei-fact-check')
# Define the claim with evidence to classify
claim = "Albert Einstein worked in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."
# Tokenize the claim with evidence
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")
model.eval()
with torch.no_grad():
    prediction = model(**x)
    label = torch.argmax(prediction.logits).item()
print(f"Label: {label}")
```
In this example, the `claim` and `evidence` variables contain the text to classify. They are tokenized together and converted to tensors, the model classifies the pair, and the resulting label is printed to the console.
|
bert-base-multilingual-uncased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 328,585
| null |
---
datasets:
- bigscience/P3
language:
- en
metrics:
- accuracy
pipeline_tag: sentence-similarity
---
# Model Card: Paraphrase Identification
## Model Details
- **Model Name**: ParaBERT
- **Description**: A fine-tuned paraphrase identification model based on BERT
- **Author**: Lucie Gabagnou, Armand L'Huillier, Yanis Rehoune, Ghiles Idris
- **Language**: Pytorch
## Intended Use
- **Primary intended uses**: This model is designed to identify whether two questions are paraphrases of each other.
- **Primary intended users**: This model is intended for use by NLP researchers and developers who are working on tasks related to paraphrase identification.
- **Out-of-scope use cases**: This model should not be used for tasks outside of paraphrase identification, or in situations where the input data may contain sensitive or confidential information.
## Model Architecture and Training Data
- **Model Architecture**: BERT
- **Training Data**: https://huggingface.co/datasets/bigscience/P3/viewer/glue_qqp_same_thing/train (Only questions)
## Evaluation Data and Results
- **Evaluation Data**: https://huggingface.co/datasets/bigscience/P3/viewer/glue_qqp_same_thing/test
- **Metrics**: Accuracy
- **Results**: 0.95
|
bert-base-uncased
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 59,663,489
| 2023-04-02T19:49:18Z
|
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dvilasuero/alpaca-gigo-detector-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
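Step 1's contrastive fine-tuning operates on sentence pairs derived from the few labeled examples: same-label texts become positive pairs, different-label texts negative ones. A rough illustration of that pair generation (a sketch of the idea, not the SetFit library's internals):

```python
from itertools import combinations

def contrastive_pairs(examples):
    # examples: list of (text, label); same-label pairs are positives (1),
    # different-label pairs are negatives (0)
    pairs = []
    for (t1, l1), (t2, l2) in combinations(examples, 2):
        pairs.append((t1, t2, 1 if l1 == l2 else 0))
    return pairs

data = [("great movie", "pos"), ("loved it", "pos"), ("terrible", "neg")]
pairs = contrastive_pairs(data)
assert ("great movie", "loved it", 1) in pairs
assert ("great movie", "terrible", 0) in pairs
```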
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dvilasuero/alpaca-gigo-detector-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
bert-large-cased-whole-word-masking-finetuned-squad
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,214
| 2023-04-02T19:52:27Z
|
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: Milora
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - milora-tags1
These are LoRA adaption weights for [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). The weights were trained on the instance prompt "Milora" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
|
bert-large-uncased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,058,496
| 2023-04-02T19:58:53Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Parailaravlaransfwuber Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
ctrl
|
[
"pytorch",
"tf",
"ctrl",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"transformers",
"license:bsd-3-clause",
"has_space"
] | null |
{
"architectures": null,
"model_type": "ctrl",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17,007
| 2023-04-02T20:04:30Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-politiker
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6607142686843872
---
# rare-politiker
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Alexander Van der Bellen

#### Heinz Fischer

#### Karl Nehammer

#### Sebastian Kurz

#### Wolfgang Sobotka

|
distilbert-base-cased
|
[
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 574,859
| 2023-04-02T20:12:23Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook
# (it downloads and unpickles the saved agent from the Hub)
model = load_from_hub(repo_id="jerka/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
distilbert-base-german-cased
|
[
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 43,667
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: pigNFTs
---
### pigNFTs Dreambooth model trained by Grigsss with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
pigNFTs (use that on your prompt)

|
distilbert-base-multilingual-cased
|
[
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,339,633
| 2023-04-02T20:13:43Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook
# (it downloads and unpickles the saved agent from the Hub)
model = load_from_hub(repo_id="jerka/taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
distilbert-base-uncased-distilled-squad
|
[
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 100,097
| 2023-04-02T20:14:03Z
|
---
license: bigscience-bloom-rail-1.0
---
# BahasaGPT-1 Fine-Tuning Documentation Summary (INT8)
## Introduction
This document provides an overview of the BahasaGPT-1 model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 70,000 Indonesian instructions.
## Model Details
**Model Name:** BahasaGPT-1
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** Over 70k Indonesia Instruct Dataset generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-1 model was fine-tuned using a dataset of over 70,000 Indonesian instructions, which were generated using the Alpaca method from Stanford and translated instructions from OA. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-1 model still has some limitations:
1. **Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
2. **Repeated Tokens:** The model occasionally produces repeated tokens in the output, which may affect the overall coherence and readability of the generated text.
## Conclusion
The BahasaGPT-1 model is a fine-tuned language model for Indonesian language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 70,000 Indonesian instructions generated using the Alpaca method and translated instructions from OA. Despite some limitations, such as occasional hallucination and repeated tokens, the model provides a valuable tool for working with Indonesian language tasks.
|
distilbert-base-uncased-finetuned-sst-2-english
|
[
"pytorch",
"tf",
"rust",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"arxiv:1910.01108",
"doi:10.57967/hf/0181",
"transformers",
"license:apache-2.0",
"model-index",
"has_space"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3,060,704
| 2023-04-02T20:19:27Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.50 +/- 17.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AdapterHub/bert-base-uncased-pf-stsb
|
[
"bert",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:sts/sts-b"
] |
text-classification
|
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.46 +/- 0.60
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
# (fill in the repo_id and filename for this model)
checkpoint = load_from_hub(repo_id="...", filename="...")
model = A2C.load(checkpoint)
```
|
AdapterHub/roberta-base-pf-mrpc
|
[
"roberta",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:sts/mrpc"
] |
text-classification
|
{
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2
| null |
---
license: apache-2.0
language:
- zh
---
15-category policy classification (labels kept as the model's original Chinese strings):
['环境统计与总量控制',
'环评与许可证',
'环境监测管理',
'海洋环境管理',
'生态环境执法',
'科技与合作',
'辐射管理',
'水环境管理',
'固废及化学品管理',
'热线与应急管理',
'长三角一体化环境合作',
'自然生态',
'规划与计划',
'土壤环境管理',
'大气环境管理']
Top-1 accuracy: 0.936
Top-3 accuracy: 0.993
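Top-1 and top-3 accuracy count a prediction as correct when the true label appears among the k highest-scoring classes. A generic sketch of the metric (not this model's evaluation code):

```python
import numpy as np

def topk_accuracy(logits, labels, k):
    # A prediction counts as correct if the true label is among the
    # k highest-scoring classes for that example.
    topk = np.argsort(logits, axis=1)[:, -k:]
    hits = [label in row for row, label in zip(topk, labels)]
    return sum(hits) / len(hits)

logits = np.array([[0.1, 0.7, 0.2],
                   [0.6, 0.3, 0.1]])
labels = [1, 2]
assert topk_accuracy(logits, labels, 1) == 0.5
assert topk_accuracy(logits, labels, 3) == 1.0
```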
|