| modelId (string) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64) | first_commit (timestamp, UTC) | card (string) | embedding (list) |
|---|---|---|---|---|---|---|---|
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 979.00 +/- 208.01
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
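The listed hyperparameters are internally consistent; a quick sanity check of the relations between them (the relations are inferred from the values themselves, not taken from the CleanRL source):

```python
# Relations inferred from the hyperparameter values above (an assumption,
# not quoted from the CleanRL source code).
total_timesteps = 50_000_000
num_envs = 120                   # local_num_envs (30) * world_size (4)
num_steps = 20
num_minibatches = 2

batch_size = num_envs * num_steps               # 2400, matches 'batch_size'
num_updates = total_timesteps // batch_size     # 20833, matches 'num_updates'
minibatch_size = batch_size // num_minibatches  # 1200, matches 'minibatch_size'
print(batch_size, num_updates, minibatch_size)  # 2400 20833 1200
```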
| [
-0.009396808221936226,
-0.004323471337556839,
-0.015999672934412956,
0.041262757033109665,
0.03234978765249252,
0.008394867181777954,
-0.013495898805558681,
-0.02305462770164013,
-0.017203163355588913,
0.07976079732179642,
0.03254534304141998,
-0.00845891609787941,
-0.005963407456874847,
0... |
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- Assault-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Assault-v5
type: Assault-v5
metrics:
- type: mean_reward
value: 15962.70 +/- 5151.99
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Assault-v5**
This is a trained model of a PPO agent playing Assault-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Assault-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Assault-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Assault-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Assault-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Assault-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Assault-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.017136666923761368,
-0.010689103044569492,
-0.02408963441848755,
0.032724976539611816,
0.04582801088690758,
-0.00021696822659578174,
-0.014401433058083057,
-0.025753766298294067,
-0.026369109749794006,
0.06758125126361847,
0.030771907418966293,
-0.010250777937471867,
0.004213942680507898,... |
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
tags:
- Centipede-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Centipede-v5
type: Centipede-v5
metrics:
- type: mean_reward
value: 7344.90 +/- 4582.18
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02237539179623127,
-0.005894860718399286,
-0.008470297791063786,
0.023548154160380363,
0.030608998611569405,
0.0015996884321793914,
-0.019562721252441406,
-0.04009121656417847,
-0.027378780767321587,
0.07097793370485306,
0.014508591033518314,
-0.01971803605556488,
0.002757657552137971,
... |
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
tags:
- generated_from_trainer
datasets:
- HiTZ/alpaca_mt
model-index:
- name: alpaca-lora-65b-en-pt-es-ca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alpaca-lora-65b-en-pt-es-ca
This model is a fine-tuned version of [/gaueko1/hizkuntza-ereduak/LLaMA/lm/huggingface/65B](https://huggingface.co//gaueko1/hizkuntza-ereduak/LLaMA/lm/huggingface/65B) on the HiTZ/alpaca_mt ['en', 'pt', 'es', 'ca'] dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 63
- total_train_batch_size: 126
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
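The reported `total_train_batch_size` follows from the per-device batch size, gradient accumulation, and device count; a minimal check (the multiplication rule is an assumption about how the Trainer derives this value, with the numbers copied from the list above):

```python
# Effective batch size = per-device batch * accumulation steps * devices
# (assumed relation; values copied from the hyperparameter list above).
train_batch_size = 1
gradient_accumulation_steps = 63
num_devices = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 126
```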
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8069 | 0.06 | 100 | 0.8033 |
| 0.8008 | 0.13 | 200 | 0.7826 |
| 0.7687 | 0.19 | 300 | 0.7721 |
| 0.7719 | 0.25 | 400 | 0.7647 |
| 0.7585 | 0.32 | 500 | 0.7588 |
| 0.7578 | 0.38 | 600 | 0.7537 |
| 0.7505 | 0.44 | 700 | 0.7491 |
| 0.7531 | 0.51 | 800 | 0.7449 |
| 0.7394 | 0.57 | 900 | 0.7416 |
| 0.7368 | 0.63 | 1000 | 0.7387 |
| 0.7412 | 0.69 | 1100 | 0.7361 |
| 0.7344 | 0.76 | 1200 | 0.7288 |
| 0.7383 | 0.82 | 1300 | 0.7281 |
| 0.7378 | 0.88 | 1400 | 0.7274 |
| 0.7204 | 0.95 | 1500 | 0.7271 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.03320442512631416,
-0.005152012687176466,
-0.00493565434589982,
0.055971305817365646,
0.03740214183926582,
0.01051064021885395,
-0.010528549551963806,
-0.020636357367038727,
-0.01998872496187687,
0.06511694937944412,
0.009295511990785599,
-0.05410103499889374,
0.012283912859857082,
0.03... |
AnonymousSub/specter-bert-model | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- DemonAttack-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DemonAttack-v5
type: DemonAttack-v5
metrics:
- type: mean_reward
value: 131815.00 +/- 2301.71
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DemonAttack-v5**
This is a trained model of a PPO agent playing DemonAttack-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id DemonAttack-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DemonAttack-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.032566674053668976,
-0.006014062091708183,
-0.012255527079105377,
0.032438743859529495,
0.043948523700237274,
-0.007705622352659702,
-0.01612933911383152,
-0.023810580372810364,
-0.026766592636704445,
0.07218585908412933,
0.03973366692662239,
-0.01176523044705391,
0.0011183451861143112,
... |
ArBert/roberta-base-finetuned-ner-agglo-twitter | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- Seaquest-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Seaquest-v5
type: Seaquest-v5
metrics:
- type: mean_reward
value: 1760.00 +/- 15.49
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Seaquest-v5**
This is a trained model of a PPO agent playing Seaquest-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Seaquest-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Seaquest-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Seaquest-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.022639768198132515,
-0.016594259068369865,
-0.006825229153037071,
0.04144861176609993,
0.04031014069914818,
-0.013590240851044655,
-0.02934073470532894,
-0.01955542527139187,
-0.013609791174530983,
0.06582256406545639,
0.009555832482874393,
-0.017298441380262375,
-0.00045826417044736445,
... |
AragornII/DialoGPT-small-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: droid22/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.05149821937084198,
0.0017297976883128285,
-0.005401731468737125,
0.050428714603185654,
0.026243161410093307,
0.03097483329474926,
-0.011300182901322842,
-0.021693455055356026,
-0.0010571115417405963,
0.04962627962231636,
0.025601154193282127,
-0.0150442598387599,
0.00775875011458993,
0.... |
Aran/DialoGPT-medium-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-03-25T16:02:28Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: nikgeo/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.021623453125357628,
-0.005506800953298807,
0.010069984011352062,
0.03926406428217888,
0.032690808176994324,
0.015552986413240433,
-0.028467321768403053,
-0.015427722595632076,
-0.01635826751589775,
0.06175999715924263,
0.006368173751980066,
0.0006425578030757606,
0.01072878297418356,
0.... |
Aran/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-03-25T16:04:10Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-mul-finetuned-en-to-lfn
results: []
language:
- en
- lfn
pipeline_tag: translation
---
# opus-mt-en-mul-finetuned-en-to-lfn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the Tatoeba English-Elefen sentence pair dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6208
- Bleu: 62.9717
- Gen Len: 11.5165
## Model description
Elefen (or Lingua Franca Nova, abbreviated to "LFN") is a simple language designed for international communication.
Its vocabulary is based on Catalan, Spanish, French, Italian and Portuguese, and its grammar is highly reduced, similar to that of Romance creoles.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2 | [
-0.022493213415145874,
-0.011190719902515411,
-0.00011450745660113171,
0.045627519488334656,
0.048320215195417404,
0.029119515791535378,
-0.021745845675468445,
-0.005118391942232847,
-0.046678416430950165,
0.045306313782930374,
0.007622231263667345,
-0.01345930527895689,
-0.01587133854627609... |
ArashEsk95/bert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
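Embeddings produced this way are typically compared with cosine similarity for clustering or semantic search; a minimal NumPy sketch, using small illustrative vectors standing in for the real 768-dimensional `model.encode()` output:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative stand-ins for model.encode() output (real vectors are 768-dim).
emb_a = np.array([0.1, 0.3, 0.5])
emb_b = np.array([0.2, 0.6, 1.0])  # parallel to emb_a, so similarity is ~1.0

print(cosine_similarity(emb_a, emb_b))
```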
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
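To see what the `mean_pooling` helper above actually does, here is a minimal sketch on hand-built tensors (the shapes and values are illustrative, not from the model): padding positions are zeroed out by the attention mask, so only real tokens contribute to the sentence embedding.

```python
import torch

def mean_pooling(model_output, attention_mask):
    # Same helper as in the usage example above.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Two "sentences", 4 tokens each, 3-dim token embeddings.
token_embeddings = torch.ones(2, 4, 3)
token_embeddings[1, :2] = 2.0  # the two real tokens of sentence 2 have value 2
attention_mask = torch.tensor([[1, 1, 1, 1],
                               [1, 1, 0, 0]])  # last two tokens of sentence 2 are padding

pooled = mean_pooling((token_embeddings,), attention_mask)
print(pooled)  # sentence 1 -> all 1.0; sentence 2 -> all 2.0 (padding ignored)
```

Without the mask, sentence 2 would average in its padding tokens and come out as 1.5 instead of 2.0.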
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | [
-0.03682786226272583,
-0.017038146033883095,
-0.016540275886654854,
0.0510595329105854,
0.01117929257452488,
0.04447409510612488,
-0.01840854622423649,
-0.002739659510552883,
-0.070090651512146,
0.08364398777484894,
0.03946809098124504,
0.013144438154995441,
0.00234610796906054,
0.04092745... |
AriakimTaiyo/DialoGPT-small-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- CrazyClimber-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CrazyClimber-v5
type: CrazyClimber-v5
metrics:
- type: mean_reward
value: 94300.00 +/- 24398.98
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **CrazyClimber-v5**
This is a trained model of a PPO agent playing CrazyClimber-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id CrazyClimber-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id CrazyClimber-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'CrazyClimber-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
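The batch-size fields above are internally consistent. As a sanity check, here is how the derived values follow from the base settings, assuming the standard CleanRL relationships between them (the formulas are inferred from the values, not stated on this card):

```python
# Base settings from the hyperparameter dump above.
local_num_envs, world_size, num_steps = 30, 4, 20
num_minibatches, total_timesteps = 2, 50_000_000

# Derived quantities (assumed relationships).
num_envs = local_num_envs * world_size                       # envs across all 4 processes
local_batch_size = local_num_envs * num_steps                # per-process rollout size
batch_size = num_envs * num_steps                            # global rollout size
minibatch_size = batch_size // num_minibatches
local_minibatch_size = local_batch_size // num_minibatches
num_updates = total_timesteps // batch_size                  # floor division

print(num_envs, batch_size, minibatch_size, num_updates)
# → 120 2400 1200 20833
```

These match the `num_envs`, `batch_size`, `minibatch_size`, and `num_updates` values reported above.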
| [
-0.027868863195180893,
-0.004777248948812485,
-0.016215791925787926,
0.027109511196613312,
0.041403885930776596,
-0.013800987042486668,
-0.011087719351053238,
-0.012106689624488354,
-0.01500653475522995,
0.0741083174943924,
0.021081114187836647,
-0.0033798173535615206,
-0.0004919420462101698... |
Aries/T5_question_generation | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 13 | 2023-03-25T16:19:29Z | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 541.00 +/- 126.37
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.007045370526611805,
-0.006025172304362059,
-0.014477293007075787,
0.041498392820358276,
0.033977773040533066,
0.006792955566197634,
-0.015739085152745247,
-0.023121226578950882,
-0.015389748848974705,
0.080145925283432,
0.030369428917765617,
-0.0033535961993038654,
-0.008970484137535095,
... |
ArjunKadya/HuggingFace | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-03-25T16:19:39Z | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 518.00 +/- 109.34
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.00724391732364893,
-0.005885034799575806,
-0.014610374346375465,
0.04128742218017578,
0.03386678174138069,
0.006833827123045921,
-0.016118116676807404,
-0.023030169308185577,
-0.015105605125427246,
0.08016421645879745,
0.029980365186929703,
-0.003204841399565339,
-0.009282584302127361,
... |
ArnaudPannatier/MLPMixer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Centipede-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Centipede-v5
type: Centipede-v5
metrics:
- type: mean_reward
value: 3047.30 +/- 2437.79
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02279559336602688,
-0.00859331339597702,
-0.006333639845252037,
0.023681849241256714,
0.033392567187547684,
0.0015260876389220357,
-0.022140244022011757,
-0.04058700054883957,
-0.025557441636919975,
0.07138241082429886,
0.01195890549570322,
-0.01523143146187067,
-0.002474452368915081,
0... |
Arnold/common_voiceha | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Centipede-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Centipede-v5
type: Centipede-v5
metrics:
- type: mean_reward
value: 2054.30 +/- 809.44
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.022692270576953888,
-0.008820229209959507,
-0.006339209154248238,
0.02414075657725334,
0.03394074738025665,
0.0012198499171063304,
-0.021816272288560867,
-0.03972838073968887,
-0.024925820529460907,
0.07036794722080231,
0.011859651654958725,
-0.01577780395746231,
-0.0015644748928025365,
... |
Arnold/wav2vec2-hausa-demo-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Centipede-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Centipede-v5
type: Centipede-v5
metrics:
- type: mean_reward
value: 1585.70 +/- 580.83
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.022934507578611374,
-0.008018093183636665,
-0.006628892384469509,
0.023760655894875526,
0.03363478183746338,
0.0009179739281535149,
-0.021981237456202507,
-0.04061735421419144,
-0.025250812992453575,
0.07096246629953384,
0.012050270102918148,
-0.015591699630022049,
-0.0018649442354217172,... |
Arnold/wav2vec2-large-xlsr-turkish-demo-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- BeamRider-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRider-v5
type: BeamRider-v5
metrics:
- type: mean_reward
value: 4463.00 +/- 1967.26
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BeamRider-v5**
This is a trained model of a PPO agent playing BeamRider-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id BeamRider-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'BeamRider-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.030617723241448402,
-0.007793543394654989,
-0.007036186289042234,
0.020986098796129227,
0.035915933549404144,
-0.0030415495857596397,
-0.012912007980048656,
-0.02728714980185032,
-0.021249646320939064,
0.058864250779151917,
0.021180540323257446,
-0.005909275729209185,
-0.00886819325387477... |
Aron/distilbert-base-uncased-finetuned-emotion | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
tags:
- DemonAttack-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DemonAttack-v5
type: DemonAttack-v5
metrics:
- type: mean_reward
value: 57116.00 +/- 27738.72
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DemonAttack-v5**
This is a trained model of a PPO agent playing DemonAttack-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id DemonAttack-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DemonAttack-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
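As an illustrative sanity check (not part of the original card), the batch-size hyperparameters above are internally consistent if one assumes the usual CleanRL relationships between per-process and global quantities:

```python
# Hypothetical consistency check for the hyperparameters listed above.
local_num_envs = 30
num_steps = 20
world_size = 4
num_minibatches = 2
total_timesteps = 50_000_000

local_batch_size = local_num_envs * num_steps                # 30 * 20 = 600
batch_size = local_batch_size * world_size                   # 600 * 4 = 2400
local_minibatch_size = local_batch_size // num_minibatches   # 600 / 2 = 300
minibatch_size = batch_size // num_minibatches               # 2400 / 2 = 1200
num_updates = total_timesteps // batch_size                  # 50M / 2400 = 20833

print(local_batch_size, batch_size, local_minibatch_size, minibatch_size, num_updates)
```

These derived values match the `local_batch_size`, `batch_size`, `local_minibatch_size`, `minibatch_size`, and `num_updates` entries recorded in the dict above.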
| [
-0.03248981386423111,
-0.008408655412495136,
-0.009635255672037601,
0.03184216469526291,
0.04639348387718201,
-0.008222077041864395,
-0.019378013908863068,
-0.023161452263593674,
-0.024138744920492172,
0.07302102446556091,
0.036468397825956345,
-0.006278721150010824,
-0.0017381395446136594,
... |
Atarax/rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Jamesbond-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Jamesbond-v5
type: Jamesbond-v5
metrics:
- type: mean_reward
value: 465.00 +/- 118.43
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5**
This is a trained model of a PPO agent playing Jamesbond-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Jamesbond-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Jamesbond-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.020912420004606247,
-0.019805941730737686,
-0.015357833355665207,
0.020031845197081566,
0.040492478758096695,
0.014171591959893703,
-0.021843839436769485,
-0.015426830388605595,
-0.01897183246910572,
0.06570669263601303,
0.030341479927301407,
-0.004208292346447706,
0.009808833710849285,
... |
Atchuth/DialoGPT-small-MBOT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Jamesbond-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Jamesbond-v5
type: Jamesbond-v5
metrics:
- type: mean_reward
value: 465.00 +/- 128.55
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5**
This is a trained model of a PPO agent playing Jamesbond-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Jamesbond-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Jamesbond-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.020848236978054047,
-0.01980915665626526,
-0.01530645601451397,
0.020449195057153702,
0.04048885032534599,
0.014218524098396301,
-0.021939387544989586,
-0.015877187252044678,
-0.01860646903514862,
0.06593599915504456,
0.0300863366574049,
-0.0044034491293132305,
0.00979036744683981,
0.02... |
Ateeb/SquadQA | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Kangaroo-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Kangaroo-v5
type: Kangaroo-v5
metrics:
- type: mean_reward
value: 1600.00 +/- 282.84
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Kangaroo-v5**
This is a trained model of a PPO agent playing Kangaroo-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Kangaroo-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Kangaroo-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.029362143948674202,
-0.009657474234700203,
-0.011447561904788017,
0.029175393283367157,
0.053547490388154984,
-0.00434150593355298,
-0.0030386243015527725,
-0.024144908413290977,
-0.022239048033952713,
0.07755188643932343,
0.011297252960503101,
-0.037593670189380646,
0.009950920939445496,... |
Augustvember/WokkaBot4 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- KungFuMaster-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: KungFuMaster-v5
type: KungFuMaster-v5
metrics:
- type: mean_reward
value: 19080.00 +/- 6065.28
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **KungFuMaster-v5**
This is a trained model of a PPO agent playing KungFuMaster-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id KungFuMaster-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'KungFuMaster-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.010255581699311733,
-0.012658579275012016,
-0.006681375205516815,
0.027020469307899475,
0.04156048595905304,
0.004483791068196297,
-0.021825261414051056,
-0.016008803620934486,
-0.013054871000349522,
0.0659574344754219,
0.012442818842828274,
-0.009975425899028778,
0.011971560306847095,
... |
Augustvember/WokkaBot5 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- KungFuMaster-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: KungFuMaster-v5
type: KungFuMaster-v5
metrics:
- type: mean_reward
value: 25720.00 +/- 5122.27
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **KungFuMaster-v5**
This is a trained model of a PPO agent playing KungFuMaster-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id KungFuMaster-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'KungFuMaster-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.010553655214607716,
-0.012565448880195618,
-0.006841345224529505,
0.026771103963255882,
0.04133794084191322,
0.004514832515269518,
-0.021682633087038994,
-0.0164068304002285,
-0.012833676300942898,
0.06595156341791153,
0.012568535283207893,
-0.009648802690207958,
0.011867018416523933,
0... |
Aviora/phobert-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-03-25T17:15:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Enter your model id: droid22/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.0218306053429842,
-0.005008382257074118,
0.00989749375730753,
0.03889109566807747,
0.03329360857605934,
0.015073851682245731,
-0.03091690130531788,
-0.015386021696031094,
-0.014957526698708534,
0.06000525876879692,
0.006828219164162874,
-0.0008022580295801163,
0.011526036076247692,
0.02... |
Awsaf/DialoGPT-medium-eren | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- SpaceInvaders-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvaders-v5
type: SpaceInvaders-v5
metrics:
- type: mean_reward
value: 8762.50 +/- 5908.55
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5**
This is a trained model of a PPO agent playing SpaceInvaders-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id SpaceInvaders-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'SpaceInvaders-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.027086583897471428,
-0.022690296173095703,
-0.016136979684233665,
0.021694885566830635,
0.047909438610076904,
-0.007034531328827143,
-0.017774708569049835,
-0.03301042318344116,
-0.00933741219341755,
0.06730003654956818,
0.030576298013329506,
-0.0074656689539551735,
0.009300570003688335,
... |
Axcel/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- StarGunner-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: StarGunner-v5
type: StarGunner-v5
metrics:
- type: mean_reward
value: 66420.00 +/- 7673.43
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **StarGunner-v5**
This is a trained model of a PPO agent playing StarGunner-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id StarGunner-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'StarGunner-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02255386859178543,
-0.0019438350573182106,
-0.012975405901670456,
0.03539566323161125,
0.04658058285713196,
0.0019114215392619371,
-0.016833242028951645,
-0.04108656570315361,
-0.02584773115813732,
0.06473775953054428,
0.04319362714886665,
-0.00920410081744194,
0.010720447637140751,
0.0... |
Axon/resnet18-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-03-25T17:16:49Z | ---
tags:
- SpaceInvaders-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvaders-v5
type: SpaceInvaders-v5
metrics:
- type: mean_reward
value: 7318.50 +/- 6248.69
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5**
This is a trained model of a PPO agent playing SpaceInvaders-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id SpaceInvaders-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'SpaceInvaders-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02671665884554386,
-0.02229081280529499,
-0.016048124060034752,
0.02164524234831333,
0.047380585223436356,
-0.007041072938591242,
-0.017413008958101273,
-0.03285611793398857,
-0.009541871026158333,
0.0671541690826416,
0.030962567776441574,
-0.007347921375185251,
0.009369730949401855,
0.... |
Aybars/ModelOnWhole | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- Surround-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Surround-v5
type: Surround-v5
metrics:
- type: mean_reward
value: 5.30 +/- 2.69
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Surround-v5**
This is a trained model of a PPO agent playing Surround-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Surround-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Surround-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.007842382416129112,
-0.0059667667374014854,
-0.012366816401481628,
0.050370246171951294,
0.04142855107784271,
0.005231136456131935,
-0.021424369886517525,
-0.022029021754860878,
-0.019834080711007118,
0.06450624763965607,
0.018934788182377815,
-0.032978691160678864,
-0.0015059392899274826... |
Ayham/albert_gpt2_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- Venture-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Venture-v5
type: Venture-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Venture-v5**
This is a trained model of a PPO agent playing Venture-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Venture-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Venture-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02379283867776394,
-0.025089699774980545,
-0.005892540793865919,
0.02493196167051792,
0.041094474494457245,
-0.006895924918353558,
-0.01891167461872101,
-0.01684674248099327,
-0.013246295042335987,
0.06300897896289825,
0.0254677701741457,
-0.009739356115460396,
-0.008323638699948788,
0.... |
Ayham/ernie_gpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-cbp-lkg-small
Google's Flan T5 model ([flan-t5-small](https://huggingface.co/google/flan-t5-small)) trained over a Legal Knowledge Graph using the training method used for [KGT-5](https://huggingface.co/spaces/apoorvumang/kgt5) | [
-0.010670877061784267,
-0.004439930431544781,
0.008208880200982094,
0.03456050157546997,
0.019410332664847374,
0.0072533669881522655,
-0.018271557986736298,
0.02864706888794899,
-0.014893898740410805,
0.04708210006356239,
0.02109900489449501,
0.0008503638673573732,
0.02670305222272873,
0.0... |
Ayham/roberta_bert_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -138.81 +/- 79.09
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'butchland/unit8-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
| [
-0.006882946006953716,
0.007043187506496906,
-0.017490971833467484,
0.01835722103714943,
0.06009843200445175,
-0.02973509393632412,
0.0067748576402664185,
-0.03449496626853943,
-0.028542743995785713,
0.06803406774997711,
0.025916272774338722,
-0.029204901307821274,
-0.001405334915034473,
0... |
Ayham/roberta_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-03-25T17:46:52Z | ---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -5.40 +/- 4.10
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.020684760063886642,
-0.00278974836692214,
0.00816739909350872,
0.02183721773326397,
0.03990298882126808,
-0.020795652642846107,
-0.006645451299846172,
-0.027422593906521797,
-0.0193210169672966,
0.0556507371366024,
0.01800590381026268,
-0.012710590846836567,
0.005571304354816675,
0.0263... |
Ayham/roberta_gpt2_new_max64_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: prueba1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prueba1
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1842
- Precision: 0.7072
- Recall: 0.6255
- F1: 0.6638
- Accuracy: 0.9724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.1520 | 0.5625 | 0.6813 | 0.6162 | 0.9659 |
| No log | 2.0 | 58 | 0.1552 | 0.6293 | 0.5817 | 0.6046 | 0.9686 |
| No log | 3.0 | 87 | 0.1586 | 0.6667 | 0.5737 | 0.6167 | 0.9709 |
| No log | 4.0 | 116 | 0.1595 | 0.6981 | 0.5896 | 0.6393 | 0.9722 |
| No log | 5.0 | 145 | 0.1699 | 0.6729 | 0.5737 | 0.6194 | 0.9676 |
| No log | 6.0 | 174 | 0.1753 | 0.6577 | 0.5817 | 0.6173 | 0.9689 |
| No log | 7.0 | 203 | 0.1665 | 0.6540 | 0.6175 | 0.6352 | 0.9681 |
| No log | 8.0 | 232 | 0.1792 | 0.7157 | 0.5618 | 0.6295 | 0.9712 |
| No log | 9.0 | 261 | 0.1682 | 0.7048 | 0.5896 | 0.6421 | 0.9714 |
| No log | 10.0 | 290 | 0.1732 | 0.7366 | 0.6016 | 0.6623 | 0.9724 |
| No log        | 11.0  | 319  | 0.1663          | 0.6720    | 0.6693 | 0.6707 | 0.9725   |
| No log | 12.0 | 348 | 0.1882 | 0.7071 | 0.5578 | 0.6236 | 0.9692 |
| No log | 13.0 | 377 | 0.1825 | 0.7103 | 0.6056 | 0.6538 | 0.9710 |
| No log | 14.0 | 406 | 0.1755 | 0.7164 | 0.5737 | 0.6372 | 0.9709 |
| No log | 15.0 | 435 | 0.1950 | 0.6842 | 0.5697 | 0.6217 | 0.9689 |
| No log | 16.0 | 464 | 0.1660 | 0.7240 | 0.6375 | 0.6780 | 0.9727 |
| No log | 17.0 | 493 | 0.1833 | 0.7255 | 0.5896 | 0.6505 | 0.9724 |
| 0.0061 | 18.0 | 522 | 0.1832 | 0.7190 | 0.6016 | 0.6551 | 0.9702 |
| 0.0061 | 19.0 | 551 | 0.1762 | 0.6828 | 0.6175 | 0.6485 | 0.9707 |
| 0.0061 | 20.0 | 580 | 0.1785 | 0.7346 | 0.6175 | 0.6710 | 0.9734 |
| 0.0061 | 21.0 | 609 | 0.1791 | 0.7093 | 0.6414 | 0.6736 | 0.9739 |
| 0.0061 | 22.0 | 638 | 0.1843 | 0.7476 | 0.6255 | 0.6811 | 0.9737 |
| 0.0061 | 23.0 | 667 | 0.1837 | 0.7371 | 0.6255 | 0.6767 | 0.9734 |
| 0.0061 | 24.0 | 696 | 0.1867 | 0.7176 | 0.6175 | 0.6638 | 0.9715 |
| 0.0061 | 25.0 | 725 | 0.1844 | 0.7089 | 0.6016 | 0.6509 | 0.9710 |
| 0.0061 | 26.0 | 754 | 0.1815 | 0.7072 | 0.6255 | 0.6638 | 0.9725 |
| 0.0061 | 27.0 | 783 | 0.1822 | 0.7021 | 0.6574 | 0.6790 | 0.9737 |
| 0.0061 | 28.0 | 812 | 0.1853 | 0.7048 | 0.6375 | 0.6695 | 0.9732 |
| 0.0061 | 29.0 | 841 | 0.1845 | 0.7069 | 0.6534 | 0.6791 | 0.9735 |
| 0.0061 | 30.0 | 870 | 0.1827 | 0.7004 | 0.6614 | 0.6803 | 0.9735 |
| 0.0061 | 31.0 | 899 | 0.1850 | 0.7014 | 0.6175 | 0.6568 | 0.9719 |
| 0.0061 | 32.0 | 928 | 0.1842 | 0.7072 | 0.6255 | 0.6638 | 0.9724 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.020231034606695175,
0.004418791271746159,
0.0006113604758866131,
0.030230378732085228,
0.03764323890209198,
0.021095821633934975,
0.005296658258885145,
-0.0063848854042589664,
-0.01781054213643074,
0.037348441779613495,
0.01827065460383892,
-0.028464041650295258,
-0.006234778091311455,
... |
Ayham/roberta_gpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2023-03-25T17:47:38Z | ---
tags:
- UpNDown-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: UpNDown-v5
type: UpNDown-v5
metrics:
- type: mean_reward
value: 191595.00 +/- 74974.86
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **UpNDown-v5**
This is a trained model of a PPO agent playing UpNDown-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id UpNDown-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'UpNDown-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.020914165303111076,
-0.003313247812911868,
0.0013738516718149185,
0.042825374752283096,
0.037920624017715454,
0.004034051205962896,
-0.02340615727007389,
-0.03444855287671089,
-0.02716672420501709,
0.06992588192224503,
0.022462423890829086,
-0.013075154274702072,
0.0067984298802912235,
... |
Ayham/roberta_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-03-25T17:47:41Z | ---
tags:
- VideoPinball-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: VideoPinball-v5
type: VideoPinball-v5
metrics:
- type: mean_reward
value: 570968.20 +/- 262194.52
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **VideoPinball-v5**
This is a trained model of a PPO agent playing VideoPinball-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id VideoPinball-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id VideoPinball-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'VideoPinball-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.014211460947990417,
-0.006849771365523338,
-0.008535497821867466,
0.031218523159623146,
0.03671569004654884,
-0.006106843706220388,
-0.012146181426942348,
-0.026297176256775856,
-0.017706185579299927,
0.0699024423956871,
0.01716834306716919,
-0.011096600443124771,
-0.004276640713214874,
... |
Ayham/robertagpt2_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-03-25T17:49:04Z | ---
tags:
- UpNDown-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: UpNDown-v5
type: UpNDown-v5
metrics:
- type: mean_reward
value: 200052.00 +/- 60214.62
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **UpNDown-v5**
This is a trained model of a PPO agent playing UpNDown-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id UpNDown-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'UpNDown-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.021092738956212997,
-0.0031804165337234735,
0.0013675866648554802,
0.042914122343063354,
0.03775298222899437,
0.004149166867136955,
-0.023783022537827492,
-0.03437815606594086,
-0.027262656018137932,
0.06932899355888367,
0.02120232954621315,
-0.012910298071801662,
0.007168986834585667,
... |
Ayham/robertagpt2_xsum4 | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- FishingDerby-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FishingDerby-v5
type: FishingDerby-v5
metrics:
- type: mean_reward
value: 25.60 +/- 14.53
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5**
This is a trained model of a PPO agent playing FishingDerby-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id FishingDerby-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'FishingDerby-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.01619071327149868,
-0.008537352085113525,
0.00991525687277317,
0.03648925572633743,
0.052598875015974045,
-0.0013798583531752229,
-0.03811904042959213,
-0.035912562161684036,
-0.0276325773447752,
0.07220644503831863,
0.02298547700047493,
-0.012882768176496029,
-0.009965136647224426,
0.0... |
Ayham/xlmroberta_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- Enduro-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Enduro-v5
type: Enduro-v5
metrics:
- type: mean_reward
value: 2241.30 +/- 284.69
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Enduro-v5**
This is a trained model of a PPO agent playing Enduro-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Enduro-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Enduro-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02069687470793724,
-0.009588447399437428,
-0.004294861573725939,
0.022763310000300407,
0.051390331238508224,
-0.004367474000900984,
-0.013681646436452866,
-0.027491077780723572,
-0.03541224077343941,
0.07283087074756622,
0.009784741327166557,
-0.017288224771618843,
0.004735286347568035,
... |
Ayham/xlmroberta_large_gpt2_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: unknown
---
# Alpaca (fine-tuned natively) 7B model download for Alpaca.cpp, Llama.cpp, and Dalai
Mirrored version of https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml in case that one gets taken down
All credit goes to Sosaka and chavinlo for creating the model.
https://huggingface.co/chavinlo/alpaca-native | [
-0.049432795494794846,
0.0036224820651113987,
-0.00982169434428215,
0.02965938113629818,
0.041401248425245285,
-0.00218579126521945,
-0.0034031623508781195,
0.003964258823543787,
-0.01995031349360943,
0.051967229694128036,
0.0426059365272522,
-0.03446664288640022,
0.04943576827645302,
0.02... |
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- conversational
---
# South Park DialoGPT Model | [
-0.037088703364133835,
0.013574725948274136,
-0.006405426189303398,
0.015203133225440979,
0.012894622050225735,
0.013175910338759422,
0.0034291576594114304,
0.03025968186557293,
-0.014159340411424637,
0.026888467371463776,
0.04454948380589485,
-0.04054173827171326,
0.02114512398838997,
0.0... |
Ayham/xlnet_gpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-03-25T17:52:54Z | ---
tags:
- Enduro-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Enduro-v5
type: Enduro-v5
metrics:
- type: mean_reward
value: 2344.70 +/- 18.42
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Enduro-v5**
This is a trained model of a PPO agent playing Enduro-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Enduro-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Enduro-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02065822295844555,
-0.009263353422284126,
-0.003871757071465254,
0.022493325173854828,
0.051394276320934296,
-0.004608798772096634,
-0.01391739584505558,
-0.027611786499619484,
-0.03554784879088402,
0.07301027327775955,
0.009008478373289108,
-0.01650848053395748,
0.004238711204379797,
0... |
Ayham/xlnet_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2023-03-25T17:53:49Z | ---
tags:
- Freeway-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Freeway-v5
type: Freeway-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.018307331949472427,
-0.01565675064921379,
-0.018311137333512306,
0.014479712583124638,
0.03287027031183243,
0.008771485649049282,
-0.01877705007791519,
-0.004914735443890095,
-0.02881351299583912,
0.061415694653987885,
0.03509792685508728,
-0.006307413335889578,
0.00563850998878479,
0.0... |
Ayham/xlnet_gpt_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 334.00 +/- 33.53
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.010268171317875385,
-0.009686022996902466,
-0.008285295218229294,
0.030764250084757805,
0.045103561133146286,
0.0018288238206878304,
-0.013914070092141628,
-0.02665088325738907,
-0.044056110084056854,
0.0538514107465744,
0.027107730507850647,
0.007586353458464146,
0.011453419923782349,
... |
Ayham/xlnet_roberta_new_summarization_cnn_dailymail | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 314.00 +/- 18.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
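The derived batch quantities in the dump above are not independent; assuming the usual CleanRL conventions (this relationship is an inference, not stated on the card), they follow from the base settings like so:

```python
# Base settings taken from the hyperparameter dump above
local_num_envs = 30
num_steps = 20
world_size = 4
num_minibatches = 2
total_timesteps = 50_000_000

# Derived quantities, matching the dumped values
local_batch_size = local_num_envs * num_steps                # 600
batch_size = local_batch_size * world_size                   # 2400
local_minibatch_size = local_batch_size // num_minibatches   # 300
minibatch_size = batch_size // num_minibatches               # 1200
num_envs = local_num_envs * world_size                       # 120
num_updates = total_timesteps // batch_size                  # 20833

print(batch_size, minibatch_size, num_updates)
```

Changing `local_num_envs` or `world_size` therefore shifts every derived value above at once.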
| [
-0.010857774876058102,
-0.010222124867141247,
-0.008813438937067986,
0.030896512791514397,
0.04597563296556473,
0.001633603940717876,
-0.013414286077022552,
-0.026945289224386215,
-0.04377998039126396,
0.053736768662929535,
0.02739850804209709,
0.007187219802290201,
0.01148237008601427,
0.... |
Ayham/xlnetgpt2_xsum7 | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Gopher-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Gopher-v5
type: Gopher-v5
metrics:
- type: mean_reward
value: 922.00 +/- 523.33
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Gopher-v5**
This is a trained model of a PPO agent playing Gopher-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Gopher-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Gopher-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.021431250497698784,
-0.00198426004499197,
-0.014895963482558727,
0.030481820926070213,
0.05410213768482208,
0.005026473198086023,
-0.008587704971432686,
-0.029040776193141937,
-0.03020678274333477,
0.07251235097646713,
0.015832021832466125,
-0.019174909219145775,
0.0027205103542655706,
... |
Ayoola/wav2vec2-large-xlsr-turkish-demo-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: afl-3.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- code_eval
library_name: asteroid
--- | [
-0.02782837115228176,
-0.024935944005846977,
0.006306545343250036,
-0.005750236567109823,
0.0742553174495697,
-0.013922563754022121,
-0.014097779057919979,
-0.007120354566723108,
-0.025705847889184952,
0.04017212614417076,
0.04156484827399254,
0.04405294731259346,
0.034399159252643585,
0.0... |
Ayou/chinese_mobile_bert | [
"pytorch",
"mobilebert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"MobileBertForMaskedLM"
],
"model_type": "mobilebert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
tags:
- Enduro-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Enduro-v5
type: Enduro-v5
metrics:
- type: mean_reward
value: 2317.90 +/- 109.39
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Enduro-v5**
This is a trained model of a PPO agent playing Enduro-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Enduro-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Enduro-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.020643526688218117,
-0.009699384681880474,
-0.004693281836807728,
0.0227755568921566,
0.05145472288131714,
-0.0040002865716814995,
-0.013581300154328346,
-0.027536986395716667,
-0.03561711311340332,
0.07302321493625641,
0.009518787264823914,
-0.0176981333643198,
0.0046694837510585785,
0... |
Ayran/DialoGPT-medium-harry-1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: MarcusAGray/ppo-PyramidsTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| [
-0.052318476140499115,
0.0021923028398305178,
-0.005414824467152357,
0.05013827607035637,
0.02569497376680374,
0.031089777126908302,
-0.009478886611759663,
-0.021350178867578506,
-0.0005919925170019269,
0.05045091733336449,
0.024063676595687866,
-0.013941084034740925,
0.0064546093344688416,
... |
Ayran/DialoGPT-medium-harry-potter-1-through-3 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- Freeway-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Freeway-v5
type: Freeway-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.018110107630491257,
-0.015709741041064262,
-0.018545087426900864,
0.014403723180294037,
0.03288765624165535,
0.00899721123278141,
-0.01876121386885643,
-0.004913455341011286,
-0.02890021912753582,
0.06142525002360344,
0.03521774336695671,
-0.006222679745405912,
0.005601796321570873,
0.0... |
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 321.00 +/- 28.79
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.010284571908414364,
-0.009738714434206486,
-0.008734172210097313,
0.030790794640779495,
0.045439012348651886,
0.0020802761428058147,
-0.01431061141192913,
-0.026801612228155136,
-0.04386711120605469,
0.05383595824241638,
0.027630649507045746,
0.0073924437165260315,
0.011344362050294876,
... |
BSC-LT/RoBERTalex | [
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:legal_ES",
"dataset:temu_legal",
"arxiv:2110.12201",
"transformers",
"legal",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: creativeml-openrail-m
language:
- ja
- en
tags:
- stable-diffusion
- text-to-image
---
# EstrildaMix

A series of models, merged together like a dark melting pot of various models.
---
## Table of Contents
- [License](#license)
- [How to Use](#how-to-use-recommendation)
- [EstrildaMix](#estrildamix-1)
- v2
- v1 & v1b
- v0.1
- [AdsimilisMix](#adsimilismix)
- v2
- v1 & v1a & v1b & v1c
---
## License
This model is open access and available to all, with a [CreativeML Open RAIL-M](https://huggingface.co/spaces/CompVis/stable-diffusion-license) license further specifying rights and usage. It has the following main features:
- You can not use the model to generate content for illegal/harmful purposes.
- Licensor claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses.
- You may reproduce and distribute copies of the Model or Derivatives of the Model, provided that You meet the conditions of this license. Please read the full license:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
---
## How to Use (Recommendation)
<details>
<summary>Recommended Settings</summary>
**Prompts**
```
1girl, solo, high resolution, masterpiece, best quality, extremely detailed CG:0.9, illustration,
```
**Negative Prompts**
```
EasyNegative, bad anatomy, (worst quality, low quality:1.4), ((disfigured)), text:1.1, title, logo, signature,
```
_[EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative) is a negative embedding..._
**Parameters**
- Sampling method: DPM++ SDE Karras
- Sampling steps: 20
- Resolution: 512x768 or 768x512
- CFG Scale: 6
- Upscaler: R-ESRGAN 4x+ Anime6B
- Denoising strength: 0.6
**VAE**
- kl-f8-anime2.ckpt
</details>
---
---
## EstrildaMix
This model aims to be able to draw the ideal classmate.
### v1 & v1b

While keeping the pale, soft atmosphere, models strong at rendering backgrounds and fine detail were merged in.
v1b is a version of v1 that skips merge #5; it tends toward a slightly paler touch and somewhat simpler backgrounds.
#### Permission (Requests)
- ✅ Use the model without crediting the creator
- ✅ Sell images they generate
- ✅ Run on services that generate images for money
- ✅ Share merges using this model
- ❌ Sell this model or merges using this model
- ✅ Have different permissions when sharing merges
These are requests, not a formal license. But I hope you will honor this request.
#### Use Models & Recipe
<details>
<summary>Model details</summary>
| Model | License | Remarks(Notices) |
| ------------------ | --------------------- | --------------------------------------------- |
| estrildaMix_v01 | CreativeML OpenRAIL M | ❌ Sell this model or merges using this model |
| HighRiseMixV2.5 | CreativeML OpenRAIL M | |
| Orion-Mix_Version2 | CreativeML OpenRAIL M | |
| X-mix V2.0 | CreativeML OpenRAIL M | ❌ Sell this model or merges using this model |
| Beauty 2.5D | CreativeML OpenRAIL M | |
| Kawaii 2.5DV2 | CreativeML OpenRAIL M | |
This merge is using "Checkpoint Merger" of AUTOMATIC1111.
| # | Model A | Model B | Multiplier | Custom Name |
| --: | --------------- | ------------------ | ---------- | -------------- |
| 1 | estrildaMix_v01 | HighRiseMixV2.5 | 0.1 | ev01_hrv25_1 |
| 2 | ev01_hrv25_1 | Orion-Mix_Version2 | 0.1 | ev01_om2_1 |
| 3 | ev01_om2_1 | X-mix V2.0 | 0.1 | evom_x2_1_o |
| 4 | evom_x2_1_o | Beauty 2.5D | 0.05 | evomx_b25v2_05 |
| 5 | evomx_b25v2_05 | Kawaii 2.5DV2 | 0.1 | estrildaMix_v1 |
- Interpolation Method : Weighted sum
- Save as float16 : true
- Bake in VAE : None (only #3 baked orangemix.vae.pt)
- Copy config from : A,B,C
</details>
---
### v0.1

#### Examples

```
1girl, solo, high resolution, masterpiece, best quality, extremely detailed CG:0.9, illustration, classroom, sitting, long hair, brown hair BREAK white school cardigan BREAK black pantyhose BREAK black pleated skirt BREAK brown loafer BREAK green eye,
Negative prompt: EasyNegative, bad anatomy, extra arms, (worst quality, low quality:1.4), ((disfigured)), text:1.1, title, logo, signature, nsfw,
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 6, Seed: 2389546383, Size: 768x512, Model hash: 410a70a422, Model: estrildaMix_v01, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
#### Permission (Requests)
- ✅ Use the model without crediting the creator
- ✅ Sell images they generate
- ✅ Run on services that generate images for money
- ✅ Share merges using this model
- ❌ Sell this model or merges using this model
- ✅ Have different permissions when sharing merges
These are requests, not a formal license. But I hope you will honor this request.
#### Use Models & Recipe
<details>
<summary>Model details</summary>
| Model | License | Remarks(Notices) |
| ------------------ | --------------------- | --------------------------------------------- |
| viewer-mix_v1.7_v2 | CreativeML OpenRAIL M | ❌ Sell this model or merges using this model |
| MeinaPastel - V4 | CreativeML OpenRAIL M | ❌ Sell this model or merges using this model |
This merge is using "Checkpoint Merger" of AUTOMATIC1111.
| Model A | Model B | Multiplier | Custom Name |
| ------------------ | ---------------- | ---------- | --------------- |
| viewer-mix_v1.7_v2 | MeinaPastel - V4 | 0.3 | estrildaMix_v01 |
- Interpolation Method : Weighted sum
- Save as float16 : true
- Copy config from : B
</details>
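The "Weighted sum" interpolation used in these recipes can be sketched as a simple per-tensor blend, `(1 - multiplier) * A + multiplier * B` (a minimal illustration with hypothetical scalar weights; real merges operate on full checkpoint state dicts):

```python
# Minimal sketch of a weighted-sum checkpoint merge.
# model_a / model_b stand in for full state_dicts; here each holds one scalar.
def weighted_sum(a, b, multiplier):
    """Blend two parameter dicts: (1 - m) * A + m * B per key."""
    return {k: (1.0 - multiplier) * a[k] + multiplier * b[k] for k in a}

model_a = {"w": 1.0}  # e.g. viewer-mix_v1.7_v2
model_b = {"w": 0.0}  # e.g. MeinaPastel - V4
merged = weighted_sum(model_a, model_b, 0.3)
print(merged["w"])  # 0.7
```

With multiplier 0.3, the merged weights stay 70% model A and take 30% from model B, which matches the recipe table above.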
---
## AdsimilisMix
This model aims for Japanese anime style, moderate realism, and cute girls.
### v2

```
1girl, solo, outdoors, (slow motion:1.9), (motion blurred background):1.8, lens flare, morning, cityscape, looking at viewer, dutch angle,
dash, running, sprint, a girl runs past, forward-bent posturem, teen, A girl running as fast as she can,
school uniform, ponytail, school bag, white shirt, black socks, brown loafer, pleated skirt, motion blur
Negative prompt: EasyNegative, bad anatomy, bad legs, extra legs, extra digits, nsfw, plump, from side
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 6, Seed: 3851457261, Size: 768x512, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
#### Permission (Requests)
- ✅ Use the model without crediting the creator
- ✅ Sell images they generate
- ❌ Run on services that generate images for money
- ✅ Share merges using this model
- ❌ Sell this model or merges using this model
- ✅ Have different permissions when sharing merges
These are requests, not a formal license. But I hope you will honor this request.
#### Use Models & Recipe
<details>
<summary>Model details</summary>
| Model | License | Remarks(Notices) |
| ---------------- | --------------------- | --------------------------------------------- |
| AdsimilisMix v1a | CreativeML OpenRAIL M | |
| Counterfeit-V2.5 | CreativeML OpenRAIL M | ❌ Sell this model or merges using this model |
| # | Model A | Model B | Model C | Multiplier | Weights | Custom Name |
| --: | ------------- | ---------------- | ------- | ------------ | -------------------------- | --------------------------- |
| 1 | Pretty 2.5DV2 | Kawaii 2.5DV2 | N/A | Weighted sum | FLAT_25 | pk_flat25 |
| 2 | pk_flat25 | Counterfeit-V2.5 | N/A | Weighted sum | FAKE_REVERSE_CUBIC_HERMITE | pkf_fakeReverseCubicHermite |
rename to "adsimilisMix_v2"
</details>
---
### v1 & v1a & v1b & v1c
**v1**

**v1a, v1b, v1c**

#### Permission (Requests)
- ✅ Use the model without crediting the creator
- ❌ Sell images they generate
- ❌ Run on services that generate images for money
- ✅ Share merges using this model
- ❌ Sell this model or merges using this model
- ✅ Have different permissions when sharing merges
These are requests, not a formal license. But I hope you will honor this request.
#### Use Models & Recipe
<details>
<summary>Model details</summary>
| Model | License | Remarks(Notices) |
| ------------- | --------------------- | ---------------- |
| Beauty 2.5D | CreativeML OpenRAIL M | |
| Kawaii 2.5DV2 | CreativeML OpenRAIL M | |
| Pretty 2.5DV2 | CreativeML OpenRAIL M | |
##### v1
| # | Model A | Model B | Model C | Multiplier | Weights | Custom Name |
| --: | -------------- | ------------- | ------- | ------------------ | ------- | --------------- |
| 1 | Beauty 2.5D | Kawaii 2.5DV2 | N/A | Weighted sum @ 0.9 | | beautyKawaii09 |
| 2 | beautyKawaii09 | Pretty 2.5DV2 | N/A | Weighted sum @ 0.4 | | adsimilisMix_v1 |
##### v1a
| # | Model A | Model B | Model C | Multiplier | Weights | Custom Name |
| --: | ------------- | ------------- | ------- | ------------ | ------- | ----------- |
| 1 | Pretty 2.5DV2 | Kawaii 2.5DV2 | N/A | Weighted sum | FLAT_25 | pk_flat25 |
rename to "adsimilisMix_v1a"
##### v1b
| # | Model A | Model B | Model C | Multiplier | Weights | Custom Name |
| --: | ------------- | ------------- | ------- | ------------ | ------- | ----------- |
| 1 | Pretty 2.5DV2 | Kawaii 2.5DV2 | N/A | Weighted sum | WRAP08 | pk_wrap08 |
rename to "adsimilisMix_v1b"
##### v1c
| # | Model A | Model B | Model C | Multiplier | Weights | Custom Name |
| --: | ------------- | ------------- | ------- | ------------ | -------------- | --------------- |
| 1 | Pretty 2.5DV2 | Kawaii 2.5DV2 | N/A | Weighted sum | R_SMOOTHSTEP/2 | pk_2rSmoothstep |
rename to "adsimilisMix_v1c"
</details>
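The "Weighted sum" steps in the recipes above amount to per-parameter linear interpolation between two checkpoints. A minimal sketch of the idea (the named weight presets such as FLAT_25, WRAP08, or R_SMOOTHSTEP/2 vary the multiplier per U-Net block, which this sketch ignores):

```python
def weighted_sum(theta_a, theta_b, multiplier):
    """Merge two checkpoints per parameter: result = (1 - m) * A + m * B."""
    return {name: (1 - multiplier) * theta_a[name] + multiplier * theta_b[name]
            for name in theta_a}

# e.g. step 1 of the v1 recipe: Beauty 2.5D and Kawaii 2.5DV2 at multiplier 0.9
merged = weighted_sum({"w": 0.0}, {"w": 1.0}, 0.9)
```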
---
| [
-0.02816551737487316,
-0.0224395003169775,
-0.01874067820608616,
0.03524984419345856,
0.05341144651174545,
0.027464760467410088,
0.00401940057054162,
0.004987445659935474,
-0.014127769507467747,
0.06870707869529724,
0.028581418097019196,
-0.009557640179991722,
-0.0008451322792097926,
0.056... |
Bakkes/BakkesModWiki | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
model-index:
- name: output-tfg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output-tfg
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
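The reported total train batch size follows from the per-device batch size and the gradient accumulation steps (single device assumed):

```python
# Effective (total) train batch size implied by the hyperparameters above.
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 4, matching the reported value
```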
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.024682465940713882,
-0.012003482319414616,
-0.006071851588785648,
0.03172198683023453,
0.03454766795039177,
0.014332246035337448,
-0.01834261231124401,
-0.0035666171461343765,
-0.03804594650864601,
0.05714636668562889,
0.023373417556285858,
-0.005176243372261524,
0.010021690279245377,
0... |
Bala/model_name | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: golightly/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.022230835631489754,
-0.005058188922703266,
0.010935217142105103,
0.03952799364924431,
0.03225184977054596,
0.014896944165229797,
-0.028327923268079758,
-0.01568193919956684,
-0.015519717708230019,
0.06115695461630821,
0.006292684935033321,
0.0009088490041904151,
0.01126394048333168,
0.0... |
BalajiSathesh/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 851.30 +/- 36.85
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders — point them at this repository's checkpoint):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id/filename -- replace with this repository's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| [
-0.045824430882930756,
-0.000533600861672312,
-0.02136337384581566,
0.03198472782969475,
0.04336988553404808,
0.017248032614588737,
-0.018508736044168472,
-0.03036237321794033,
-0.03730723261833191,
0.06907293200492859,
0.02139224298298359,
0.002474032575264573,
0.015619015321135521,
0.027... |
BatuhanYilmaz/dummy | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.86 +/- 3.98
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r YoanG/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to set `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
| [
-0.044792480766773224,
-0.0031133433803915977,
0.011051454581320286,
0.03878336399793625,
0.02526760846376419,
-0.01034107431769371,
-0.010900041088461876,
-0.026822829619050026,
-0.03814319893717766,
0.055217765271663666,
0.03681774437427521,
0.0011946667218580842,
0.020581167191267014,
0... |
Beatriz/model_name | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- MontezumaRevenge-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MontezumaRevenge-v5
type: MontezumaRevenge-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **MontezumaRevenge-v5**
This is a trained model of a PPO agent playing MontezumaRevenge-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id MontezumaRevenge-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MontezumaRevenge-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'MontezumaRevenge-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
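The derived sizes in the hyperparameter dump above follow from a few base settings; a quick check of the standard PPO rollout bookkeeping (values copied from the dump):

```python
# Base settings from the dump above
local_num_envs = 30
world_size = 4
num_steps = 20
num_minibatches = 2
total_timesteps = 50_000_000

# Derived quantities, as reported in the dump
num_envs = local_num_envs * world_size                      # 120
batch_size = num_envs * num_steps                           # 2400 transitions per update
local_batch_size = local_num_envs * num_steps               # 600
minibatch_size = batch_size // num_minibatches              # 1200
local_minibatch_size = local_batch_size // num_minibatches  # 300
num_updates = total_timesteps // batch_size                 # 20833
```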
| [
-0.021663039922714233,
-0.0017968988977372646,
-0.00824796874076128,
0.04267508536577225,
0.04049060866236687,
0.003333748783916235,
-0.0015362461563199759,
-0.026278555393218994,
-0.025038113817572594,
0.06441210955381393,
0.02123837172985077,
-0.019821882247924805,
-0.00449115876108408,
... |
Bee-Garbs/DialoGPT-real-cartman-small | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- ChopperCommand-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ChopperCommand-v5
type: ChopperCommand-v5
metrics:
- type: mean_reward
value: 11510.00 +/- 4084.23
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5**
This is a trained model of a PPO agent playing ChopperCommand-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id ChopperCommand-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'ChopperCommand-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02896985039114952,
-0.007463948801159859,
-0.012883597984910011,
0.02448529377579689,
0.038043588399887085,
-0.007684994023293257,
-0.021773753687739372,
-0.021156804636120796,
-0.011447768658399582,
0.07262308150529861,
0.03440587595105171,
-0.013574197888374329,
-0.0032224624883383512,
... |
Beelow/model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1760.70 +/- 86.57
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders — point them at this repository's checkpoint):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id/filename -- replace with this repository's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| [
-0.04517984390258789,
-0.0008045750437304378,
-0.022212952375411987,
0.032218676060438156,
0.04356980696320534,
0.017729179933667183,
-0.018259672448039055,
-0.030833637341856956,
-0.03734579682350159,
0.06957847625017166,
0.02272847667336464,
0.0029200564604252577,
0.014756170101463795,
0... |
Benicio/t5-small-finetuned-en-to-ro | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | I will give more info but this is how to generate text with the model.
You will need to install
```bash
pip install peft
```
To run in python
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModelForCausalLM
peft_model_id = 'GrantC/alpaca-opt-1.3b-lora'
BASE_MODEL = 'facebook/opt-1.3b'
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = PeftModelForCausalLM.from_pretrained(model, peft_model_id, device_map="auto")
prompt = "Write a blog post about shaving cream:"
print(prompt)
inputs = tokenizer(prompt, return_tensors='pt')
output = model.generate(input_ids=inputs["input_ids"], do_sample=True, penalty_alpha=0.6, top_k=4, max_new_tokens=256)
outputs = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(outputs)
``` | [
-0.005507692229002714,
-0.0023007707204669714,
-0.010932514443993568,
0.03679012507200241,
0.04825206100940704,
0.05210970342159271,
0.007672896608710289,
-0.018442049622535706,
-0.02956460416316986,
0.07818911969661713,
0.03984779492020607,
-0.0011265008943155408,
-0.0051427013240754604,
... |
BertChristiaens/EmojiPredictor | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-03-25T22:02:58Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
datasets: embed/EasyNegative
---
## Descriptions
This model is for reproducing delicate, beautiful flat-color ligne claire style anime pictures.
You can use tags like `ligne claire`, `lineart` or `monochrome` etc. to get more styles!
## Recommend settings:
- VAE: Orangemix / Anything V4.5 / NAI
- Sampler: DPM++ 2M Karras
- Sampling steps: 20
- Negative embedding: [EasyNegative](https://civitai.com/models/7808), [badhandv4](https://civitai.com/models/16993/badhandv4-animeillustdiffusion)
## Samples
See: https://civitai.com/models/24387
## Models used
Merged from the following models, with block weights tweaked:
- 2020s Anime Magazine Illustration Style
- Anime Lineart Style
- Avas Anime Hamster
- Beautiful Detailed Eyes
- Chillout Mix
- Epi Noise Offset
- Hipoly 3D Model
- Ligne Claire Anime Style
- Makoto Shinkai Substyles
- Mika Pikazo Style
- Pastel Mix Stylized Anime
- Tabi Art Style
- Thicker Lines Anime Style Mix | [
0.013060392811894417,
-0.02557990700006485,
-0.005075979512184858,
0.028932370245456696,
0.04682990908622742,
0.004828664008527994,
0.014241239055991173,
0.005083294585347176,
-0.010035969316959381,
0.06477721035480499,
0.01549870427697897,
-0.015382558107376099,
0.018526297062635422,
0.03... |
Biasface/DDDC | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
{}
---
LoRA weights for LLaMA-7b trained on a subset of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset in which the long tail of lengthy entries is removed and the prompt is shortened to the following:
```
Appropriately respond to the following instruction:
### Instruction: Write a javascript function that sorts array alphabetically
### Response:
```
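A small helper reproducing this prompt format might look like the following (the exact newline layout is an assumption based on the example above):

```python
def build_prompt(instruction: str) -> str:
    """Build the shortened Alpaca-style prompt shown above."""
    return (
        "Appropriately respond to the following instruction:\n"
        f"### Instruction: {instruction}\n"
        "### Response:"
    )

print(build_prompt("Write a javascript function that sorts array alphabetically"))
```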
It doesn't contain the foundation model itself, so it's MIT licensed!
Tuned using https://github.com/lxe/simple-llama-finetuner | [
-0.030269421637058258,
-0.006066630128771067,
0.008821706287562847,
0.03899425268173218,
0.03536595031619072,
-0.00006393736111931503,
-0.011963782832026482,
-0.008794242516160011,
-0.013415772467851639,
0.03668207302689552,
0.0561697892844677,
-0.010176139883697033,
0.007726486772298813,
... |
BigSalmon/FormalBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-03-25T22:38:11Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1033.94 +/- 46.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders — point them at this repository's checkpoint):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id/filename -- replace with this repository's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| [
-0.04571649432182312,
-0.0012262219097465277,
-0.02179987169802189,
0.03228692710399628,
0.04352211579680443,
0.017943143844604492,
-0.018572762608528137,
-0.030803481116890907,
-0.037137676030397415,
0.06919460743665695,
0.022309601306915283,
0.0034475228749215603,
0.014951803721487522,
0... |
BigSalmon/FormalBerta3 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-03-25T22:39:05Z | ---
tags:
- Riverraid-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Riverraid-v5
type: Riverraid-v5
metrics:
- type: mean_reward
value: 3498.00 +/- 125.76
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Riverraid-v5**
This is a trained model of a PPO agent playing Riverraid-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Riverraid-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Riverraid-v5 --seed 10
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Riverraid-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 10,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.025014808401465416,
-0.011683310382068157,
-0.008119185455143452,
0.034107524901628494,
0.03902602940797806,
-0.0072792405262589455,
-0.022801941260695457,
-0.032625727355480194,
-0.017732949927449226,
0.05932100489735603,
0.014308653771877289,
-0.012546523474156857,
0.007588449399918318,... |
BigSalmon/FormalRobertaa | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distingroberta-base-mrpc-glue-pixelciosa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8919382504288165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distingroberta-base-mrpc-glue-pixelciosa
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4939
- Accuracy: 0.8456
- F1: 0.8919
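For reference, the Accuracy and F1 above are the standard GLUE MRPC metrics (F1 over the positive "equivalent" class). A toy sketch of how the two relate, with illustrative labels that are not from this evaluation set:

```python
# Toy accuracy/F1 computation (labels are illustrative, not MRPC data).
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))  # true positives
precision = tp / sum(y_pred)
recall = tp / sum(y_true)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, f1)  # → 0.75 0.8
```

F1 can exceed accuracy (as it does for this model) when the positive class dominates and the model is good at it.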
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5223 | 1.09 | 500 | 0.4939 | 0.8456 | 0.8919 |
| 0.375 | 2.18 | 1000 | 0.6612 | 0.8407 | 0.8873 |
| 0.1932 | 3.27 | 1500 | 0.7584 | 0.8627 | 0.9011 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.028900889679789543,
-0.00028173645841889083,
-0.006051661446690559,
0.028661875054240227,
0.06631424278020859,
0.017194271087646484,
0.005318103823810816,
0.005441895220428705,
-0.05016673356294632,
0.059973523020744324,
0.013890850357711315,
-0.030970066785812378,
-0.0007147969445213675,... |
BigSalmon/GPTIntro | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.698
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3721
- Rouge1: 47.698
- Rouge2: 23.8078
- Rougel: 40.1138
- Rougelsum: 43.7749
- Gen Len: 17.2759
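Rouge1 above measures unigram overlap between generated and reference summaries. A simplified sketch of the idea (no stemming or bootstrap aggregation, unlike the real `rouge_score` package used by the Trainer):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F-measure: unigram overlap between two texts."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Illustrative pair (not from samsum):
print(round(rouge1_f("amanda baked cookies today",
                     "amanda baked cookies and will bring them") * 100, 1))
```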
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403 | 1.0 | 1842 | 1.3822 | 47.3182 | 23.8486 | 39.7145 | 43.5756 | 17.0256 |
| 1.3572 | 2.0 | 3684 | 1.3747 | 47.5891 | 23.6341 | 39.7983 | 43.6862 | 17.4347 |
| 1.2822 | 3.0 | 5526 | 1.3721 | 47.698 | 23.8078 | 40.1138 | 43.7749 | 17.2759 |
| 1.2375 | 4.0 | 7368 | 1.3764 | 47.7671 | 24.1413 | 40.1597 | 43.9313 | 17.2943 |
| 1.1935 | 5.0 | 9210 | 1.3781 | 47.626 | 23.7564 | 39.844 | 43.7166 | 17.3077 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.019771317020058632,
-0.012554524466395378,
0.001619459129869938,
0.04339687153697014,
0.036798324435949326,
0.007168421056121588,
-0.005429239012300968,
-0.025337575003504753,
-0.042258646339178085,
0.0460885688662529,
0.032764732837677,
-0.02080664411187172,
0.001782805542461574,
0.017... |
BigSalmon/GPTNeo350MInformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: harshil128/ML-Agents-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.05120149627327919,
0.0008050532196648419,
-0.005609163548797369,
0.049732375890016556,
0.026041770353913307,
0.031372133642435074,
-0.01104141864925623,
-0.022259222343564034,
-0.0002945324231404811,
0.050905078649520874,
0.026041969656944275,
-0.013577762059867382,
0.00741358520463109,
... |
BigSalmon/InformalToFormalLincoln25 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-03-25T23:29:13Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: clemdev2000/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.029330501332879066,
-0.0047383299097418785,
-0.0177165400236845,
0.05208871513605118,
0.03556349501013756,
0.025414379313588142,
-0.001448269234970212,
-0.034464865922927856,
-0.025673696771264076,
0.04669329524040222,
0.026086166501045227,
-0.008649320341646671,
0.018692325800657272,
0... |
BigSalmon/MrLincoln | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-03-25T23:43:30Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: clemdev2000/MLAgents-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.051794975996017456,
0.0010273547377437353,
-0.0050014108419418335,
0.05065123736858368,
0.02478402480483055,
0.02953280881047249,
-0.011398660019040108,
-0.022672249004244804,
-0.0009737039217725396,
0.05065422132611275,
0.02508944645524025,
-0.013214228674769402,
0.007535099517554045,
... |
BigSalmon/MrLincoln10 | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-03-25T23:47:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.85 +/- 24.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are illustrative placeholders, not this model's actual location):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it with SB3
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| [
-0.037278130650520325,
-0.002913010772317648,
-0.005015345755964518,
0.025949733331799507,
0.04538716748356819,
-0.021462639793753624,
-0.005808066576719284,
-0.028219053521752357,
-0.03292946517467499,
0.06659404933452606,
0.032545849680900574,
-0.02356349304318428,
0.022770479321479797,
... |
BigSalmon/MrLincoln11 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2023-03-25T23:48:12Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.49 +/- 0.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are illustrative placeholders, not this model's actual location):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it with SB3
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
| [
-0.049749042838811874,
-0.016300125047564507,
-0.008717065677046776,
0.03643500804901123,
0.04119068756699562,
0.002968122949823737,
-0.021221935749053955,
-0.010411056689918041,
-0.03795416280627251,
0.05694544315338135,
0.024198630824685097,
-0.0029822622891515493,
0.03175351023674011,
0... |
BigSalmon/MrLincoln13 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- TimePilot-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: TimePilot-v5
type: TimePilot-v5
metrics:
- type: mean_reward
value: 10940.00 +/- 1704.23
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **TimePilot-v5**
This is a trained model of a PPO agent playing TimePilot-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id TimePilot-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/TimePilot-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/TimePilot-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/TimePilot-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id TimePilot-v5 --seed 10
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'TimePilot-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 10,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
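The batch-size bookkeeping in this distributed run is internally consistent; a quick sanity check, assuming the usual CleanRL/cleanba conventions (local values scale by `world_size`, and one update consumes `num_envs * num_steps` transitions):

```python
# Derive the global batch sizes from the local config above.
world_size = 4
local_num_envs = 30
num_steps = 20
num_minibatches = 2
total_timesteps = 50_000_000

num_envs = local_num_envs * world_size           # 30 envs per process x 4 processes
batch_size = num_envs * num_steps                # transitions gathered per update
minibatch_size = batch_size // num_minibatches   # SGD minibatch per update
num_updates = total_timesteps // batch_size      # total PPO updates over training
print(num_envs, batch_size, minibatch_size, num_updates)  # → 120 2400 1200 20833
```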
| [
-0.009864234365522861,
-0.009060212410986423,
-0.009020563215017319,
0.0234452523291111,
0.03399919718503952,
0.00016039937327150255,
-0.011995293200016022,
-0.018396276980638504,
-0.018106073141098022,
0.07498341053724289,
0.03834835812449455,
-0.014053123071789742,
-0.006505600642412901,
... |
BigSalmon/MrLincoln14 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 582.50 +/- 218.59
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga artbreguez -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga artbreguez -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga artbreguez
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
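A rough feel for what this DQN config implies, assuming standard Stable-Baselines3 semantics (one training step every `train_freq` environment steps once `learning_starts` is reached, and the target network synced every `target_update_interval` steps):

```python
# Back-of-the-envelope counts implied by the hyperparameters above.
n_timesteps = 1_000_000
learning_starts = 100_000
train_freq = 4
gradient_steps = 1
target_update_interval = 1_000

grad_updates = (n_timesteps - learning_starts) // train_freq * gradient_steps
target_syncs = n_timesteps // target_update_interval
print(grad_updates, target_syncs)  # → 225000 1000
```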
| [
-0.03771292045712471,
-0.016764583066105843,
-0.015093378722667694,
0.03712395951151848,
0.04802761226892471,
-0.004429464228451252,
-0.014886351302266121,
-0.02548869512975216,
-0.028841273859143257,
0.054119180887937546,
0.021138207986950874,
-0.03140478953719139,
0.017232853919267654,
0... |
BigSalmon/MrLincoln3 | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | null | ---
tags:
- Qbert-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Qbert-v5
type: Qbert-v5
metrics:
- type: mean_reward
value: 15060.00 +/- 130.96
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Qbert-v5**
This is a trained model of a PPO agent playing Qbert-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Qbert-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Qbert-v5 --seed 10
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Qbert-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 10,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.00968882255256176,
-0.003173989010974765,
-0.019002525135874748,
0.027714546769857407,
0.043063562363386154,
-0.010364963673055172,
-0.012748640030622482,
-0.030284423381090164,
-0.022074976935982704,
0.061596062034368515,
0.004647132474929094,
-0.013028840534389019,
0.005055049434304237,... |
BigSalmon/MrLincoln6 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- RoadRunner-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoadRunner-v5
type: RoadRunner-v5
metrics:
- type: mean_reward
value: 53360.00 +/- 7575.65
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **RoadRunner-v5**
This is a trained model of a PPO agent playing RoadRunner-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id RoadRunner-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id RoadRunner-v5 --seed 10
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'RoadRunner-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 10,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.005137154366821051,
-0.005952234845608473,
-0.02913896180689335,
0.00698507996276021,
0.04943525046110153,
0.0007733998936600983,
-0.005257240496575832,
-0.022728338837623596,
-0.018711822107434273,
0.06626737117767334,
0.021460868418216705,
-0.006004618480801582,
0.006984592881053686,
... |
BigSalmon/MrLincoln8 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- Enduro-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Enduro-v5
type: Enduro-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Enduro-v5**
This is a trained model of a PPO agent playing Enduro-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Enduro-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Enduro-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02055790089070797,
-0.008516515605151653,
-0.0055493637919425964,
0.023768626153469086,
0.05022072792053223,
-0.0019220731919631362,
-0.01158470381051302,
-0.027780551463365555,
-0.03743912652134895,
0.07305201888084412,
0.00916452705860138,
-0.021619901061058044,
0.00902222003787756,
0... |
BigSalmon/MrLincolnBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Enduro-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Enduro-v5
type: Enduro-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Enduro-v5**
This is a trained model of a PPO agent playing Enduro-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Enduro-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Enduro-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.020865658298134804,
-0.008632798679172993,
-0.0053124865517020226,
0.023747043684124947,
0.050062015652656555,
-0.002093174494802952,
-0.011478858068585396,
-0.027609853073954582,
-0.037490542978048325,
0.07306105643510818,
0.008982278406620026,
-0.02163451910018921,
0.00897397380322218,
... |
BigSalmon/Neo | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: creativeml-openrail-m
---
This anime extract tunes generations toward a flat-shaded, anime-like look.
This is an extracted LoRA of a merge, done by Discord user DarkSide, of [detailedproject](https://huggingface.co/closertodeath/detailedproject) and animelike 2.5D. A similar effort can be found [here](https://civitai.com/models/24330/a1-filter). Compared to the A1Filter, this extract tunes toward a flat style more effectively.
This extract was done against aom2_nsfw, and the soft version against novel's model.
CivitAI: https://civitai.com/models/24796 | [
-0.025291835889220238,
-0.011863523162901402,
-0.006265323609113693,
0.03746536001563072,
0.034865498542785645,
0.00519909942522645,
-0.008126880042254925,
-0.003917265217751265,
-0.00740458769723773,
0.0739479586482048,
0.03798244893550873,
0.004436394665390253,
0.012303660623729229,
0.02... |
BigSalmon/ParaphraseParentheses | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- FishingDerby-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FishingDerby-v5
type: FishingDerby-v5
metrics:
- type: mean_reward
value: 26.50 +/- 10.01
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5**
This is a trained model of a PPO agent playing FishingDerby-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id FishingDerby-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'FishingDerby-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.01564687117934227,
-0.007386222947388887,
0.007482185028493404,
0.03756056725978851,
0.04961472749710083,
0.00048425394925288856,
-0.034965649247169495,
-0.035717885941267014,
-0.029753578826785088,
0.0709616094827652,
0.025083065032958984,
-0.01827271468937397,
-0.005126248113811016,
0... |
BigSalmon/ParaphraseParentheses2.0 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- FishingDerby-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FishingDerby-v5
type: FishingDerby-v5
metrics:
- type: mean_reward
value: 29.50 +/- 6.05
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5**
This is a trained model of a PPO agent playing FishingDerby-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id FishingDerby-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'FishingDerby-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.015206148847937584,
-0.007486033719033003,
0.0075560640543699265,
0.037307608872652054,
0.04973314329981804,
0.000860267726238817,
-0.0358663909137249,
-0.035883285105228424,
-0.02982850931584835,
0.07067953795194626,
0.025435123592615128,
-0.018739452585577965,
-0.005173505283892155,
0... |
BigSalmon/PhraseBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- FishingDerby-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FishingDerby-v5
type: FishingDerby-v5
metrics:
- type: mean_reward
value: 28.00 +/- 8.73
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5**
This is a trained model of a PPO agent playing FishingDerby-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id FishingDerby-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'FishingDerby-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.015586348250508308,
-0.007016659248620272,
0.008281097747385502,
0.03746417537331581,
0.049188967794179916,
0.0005709300166927278,
-0.03620358929038048,
-0.035230450332164764,
-0.029672930017113686,
0.07080543041229248,
0.025180330500006676,
-0.018223769962787628,
-0.005337764509022236,
... |
BigSalmon/Points | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2023-03-26T00:12:19Z | ---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -10.80 +/- 3.92
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02165873348712921,
-0.0007680486887693405,
0.004648970905691385,
0.02233501337468624,
0.038707904517650604,
-0.019024327397346497,
-0.002595282392576337,
-0.028330156579613686,
-0.020943189039826393,
0.055321406573057175,
0.021185141056776047,
-0.01949879713356495,
0.009948568418622017,
... |
BigSalmon/Robertsy | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-03-26T00:12:54Z | ---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -9.40 +/- 4.39
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.021478397771716118,
-0.0007684580632485449,
0.004766956437379122,
0.022677946835756302,
0.03865676373243332,
-0.019163278862833977,
-0.002510653343051672,
-0.0282419566065073,
-0.02093815989792347,
0.05512513592839241,
0.02143433503806591,
-0.019842755049467087,
0.009671878069639206,
0.... |
BigSalmon/Rowerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- Freeway-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Freeway-v5
type: Freeway-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
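The batch-related values above are not independent; in CleanRL-style setups they are typically derived from a few base settings. A quick sketch (assuming the standard CleanRL derivation, not read from this repo's code) reproduces the dumped numbers:

```python
# Base settings taken from the hyperparameter dump above.
local_num_envs = 30      # envs per actor process
world_size = 4           # number of distributed processes
num_steps = 20           # rollout length
num_minibatches = 2
total_timesteps = 50_000_000

# Assumed CleanRL-style derivations of the remaining batch values.
num_envs = local_num_envs * world_size                       # 120
local_batch_size = local_num_envs * num_steps                # 600
batch_size = num_envs * num_steps                            # 2400
local_minibatch_size = local_batch_size // num_minibatches   # 300
minibatch_size = batch_size // num_minibatches               # 1200
num_updates = total_timesteps // batch_size                  # 20833

print(num_envs, batch_size, minibatch_size, num_updates)
```

These match the dumped values (120, 2400, 1200, 20833), which suggests the dump was produced from exactly these base settings.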
| [
-0.01829620823264122,
-0.01455596648156643,
-0.020725267007946968,
0.015718858689069748,
0.03099759668111801,
0.009802134707570076,
-0.016240041702985764,
-0.0048437174409627914,
-0.0306804608553648,
0.06072993203997612,
0.03796859830617905,
-0.012395087629556656,
0.009350210428237915,
0.0... |
BigSalmon/T5F | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 6 | 2023-03-26T00:14:47Z | ---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -5.80 +/- 5.02
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.0212156530469656,
-0.0006622181390412152,
0.004795599728822708,
0.022250521928071976,
0.038608748465776443,
-0.01885409839451313,
-0.002855213126167655,
-0.02825678326189518,
-0.02123526856303215,
0.05535511299967766,
0.021396653726696968,
-0.01957034133374691,
0.01012621633708477,
0.02... |
BigSalmon/TS3 | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible",
"has_space"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -179.38 +/- 79.59
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Max100ce/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
| [
-0.00710943853482604,
0.004920728970319033,
-0.01767035946249962,
0.015282419510185719,
0.05731649324297905,
-0.02931240014731884,
0.009393698535859585,
-0.03156890347599983,
-0.028701474890112877,
0.06788612902164459,
0.024439772590994835,
-0.031386848539114,
-0.005577549804002047,
0.0273... |
BigSalmon/prepositions | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-03-26T00:25:16Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.24 +/- 0.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders following the usual `huggingface_sb3` layout, not taken from this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id / filename -- substitute the actual values for this model.
checkpoint = load_from_hub(
    repo_id="<user>/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```
| [
-0.049949973821640015,
-0.016453241929411888,
-0.008516556583344936,
0.03614576905965805,
0.040838733315467834,
0.0028271302580833435,
-0.021135898306965828,
-0.01047163549810648,
-0.03744567930698395,
0.0575186163187027,
0.024761265143752098,
-0.0036615324206650257,
0.031748056411743164,
... |
BigTooth/DialoGPT-Megumin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | 2023-03-26T00:25:40Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.08 +/- 0.49
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders following the usual `huggingface_sb3` layout, not taken from this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id / filename -- substitute the actual values for this model.
checkpoint = load_from_hub(
    repo_id="<user>/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```
| [
-0.04967503622174263,
-0.01603759452700615,
-0.008736022748053074,
0.036497924476861954,
0.041199106723070145,
0.003123653819784522,
-0.021397989243268967,
-0.010558065958321095,
-0.03799932822585106,
0.05692253261804581,
0.02439033053815365,
-0.002832514001056552,
0.03198268637061119,
0.0... |
BigTooth/DialoGPT-small-tohru | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-03-26T00:28:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: smashing-sexism-robert-weighted-final-2
results: []
---
# smashing-sexism-robert-weighted-final-2
This model is a fine-tuned version of [readerbench/RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
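For reference, the `linear` scheduler decays the learning rate from its initial value to zero over the total number of optimizer steps (assuming no warmup, the Trainer default when `warmup_steps` is 0). A minimal sketch of that schedule, using the listed base rate:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Linearly decay base_lr to 0 over total_steps (no warmup)."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return base_lr * remaining

# Roughly the number of optimizer steps logged in the results table below.
total = 38_800

print(linear_lr(0, total))            # full base rate at the start
print(linear_lr(total // 2, total))   # half the base rate at the midpoint
print(linear_lr(total, total))        # decayed to zero at the end
```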
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9887 | 0.1 | 400 | 0.9251 |
| 0.9326 | 0.21 | 800 | 1.0643 |
| 0.8767 | 0.31 | 1200 | 0.8270 |
| 0.9989 | 0.41 | 1600 | 1.0447 |
| 0.8717 | 0.51 | 2000 | 0.8382 |
| 0.8298 | 0.62 | 2400 | 0.8867 |
| 0.9462 | 0.72 | 2800 | 0.8950 |
| 0.8885 | 0.82 | 3200 | 0.8633 |
| 0.9317 | 0.92 | 3600 | 0.8930 |
| 0.7629 | 1.03 | 4000 | 1.1367 |
| 0.7152 | 1.13 | 4400 | 0.9594 |
| 0.66 | 1.23 | 4800 | 0.9411 |
| 0.6867 | 1.33 | 5200 | 1.1500 |
| 0.6281 | 1.44 | 5600 | 0.9684 |
| 0.6442 | 1.54 | 6000 | 1.1268 |
| 0.6769 | 1.64 | 6400 | 0.9762 |
| 0.7184 | 1.74 | 6800 | 0.8957 |
| 0.58 | 1.85 | 7200 | 0.9875 |
| 0.5751 | 1.95 | 7600 | 1.2363 |
| 0.4031 | 2.05 | 8000 | 1.3173 |
| 0.3862 | 2.15 | 8400 | 1.3331 |
| 0.5009 | 2.26 | 8800 | 1.4265 |
| 0.4591 | 2.36 | 9200 | 1.5329 |
| 0.4284 | 2.46 | 9600 | 1.3033 |
| 0.5236 | 2.56 | 10000 | 1.2444 |
| 0.5135 | 2.67 | 10400 | 1.2472 |
| 0.5369 | 2.77 | 10800 | 1.6505 |
| 0.4701 | 2.87 | 11200 | 1.3840 |
| 0.5371 | 2.97 | 11600 | 1.3600 |
| 0.2557 | 3.08 | 12000 | 1.4148 |
| 0.2952 | 3.18 | 12400 | 1.7975 |
| 0.2098 | 3.28 | 12800 | 2.0480 |
| 0.236 | 3.38 | 13200 | 1.9231 |
| 0.2414 | 3.49 | 13600 | 1.6038 |
| 0.387 | 3.59 | 14000 | 1.6627 |
| 0.3059 | 3.69 | 14400 | 1.5931 |
| 0.2872 | 3.79 | 14800 | 1.5828 |
| 0.1751 | 3.9 | 15200 | 1.9071 |
| 0.2429 | 4.0 | 15600 | 1.6990 |
| 0.164 | 4.1 | 16000 | 1.9178 |
| 0.0941 | 4.2 | 16400 | 2.1213 |
| 0.1948 | 4.31 | 16800 | 2.0160 |
| 0.1442 | 4.41 | 17200 | 2.0305 |
| 0.2209 | 4.51 | 17600 | 1.9717 |
| 0.1375 | 4.61 | 18000 | 2.0309 |
| 0.1995 | 4.72 | 18400 | 2.0615 |
| 0.1421 | 4.82 | 18800 | 2.0320 |
| 0.2076 | 4.92 | 19200 | 1.9974 |
| 0.0748 | 5.02 | 19600 | 1.9942 |
| 0.0689 | 5.13 | 20000 | 2.1029 |
| 0.0841 | 5.23 | 20400 | 2.2356 |
| 0.0782 | 5.33 | 20800 | 2.2074 |
| 0.1662 | 5.43 | 21200 | 2.3315 |
| 0.0415 | 5.54 | 21600 | 2.5986 |
| 0.0731 | 5.64 | 22000 | 2.2913 |
| 0.0851 | 5.74 | 22400 | 2.4306 |
| 0.0923 | 5.84 | 22800 | 2.4737 |
| 0.099 | 5.95 | 23200 | 2.2077 |
| 0.0297 | 6.05 | 23600 | 2.2406 |
| 0.0365 | 6.15 | 24000 | 2.5536 |
| 0.0131 | 6.25 | 24400 | 2.7311 |
| 0.0838 | 6.36 | 24800 | 2.3021 |
| 0.0392 | 6.46 | 25200 | 2.4769 |
| 0.0357 | 6.56 | 25600 | 2.4404 |
| 0.0955 | 6.66 | 26000 | 2.4813 |
| 0.1119 | 6.77 | 26400 | 2.3819 |
| 0.0916 | 6.87 | 26800 | 2.5341 |
| 0.1437 | 6.97 | 27200 | 2.2940 |
| 0.0333 | 7.08 | 27600 | 2.4652 |
| 0.0276 | 7.18 | 28000 | 2.5684 |
| 0.0306 | 7.28 | 28400 | 2.4722 |
| 0.0248 | 7.38 | 28800 | 2.7375 |
| 0.0199 | 7.49 | 29200 | 2.7708 |
| 0.0443 | 7.59 | 29600 | 2.7067 |
| 0.0119 | 7.69 | 30000 | 2.6394 |
| 0.0606 | 7.79 | 30400 | 2.5045 |
| 0.0467 | 7.9 | 30800 | 2.3479 |
| 0.0438 | 8.0 | 31200 | 2.7489 |
| 0.0033 | 8.1 | 31600 | 2.6423 |
| 0.0306 | 8.2 | 32000 | 2.5070 |
| 0.033 | 8.31 | 32400 | 2.7068 |
| 0.0114 | 8.41 | 32800 | 2.7400 |
| 0.0032 | 8.51 | 33200 | 2.5803 |
| 0.0305 | 8.61 | 33600 | 2.8058 |
| 0.0253 | 8.72 | 34000 | 2.5497 |
| 0.0183 | 8.82 | 34400 | 2.5782 |
| 0.0651 | 8.92 | 34800 | 2.7173 |
| 0.0345 | 9.02 | 35200 | 2.5939 |
| 0.0206 | 9.13 | 35600 | 2.6243 |
| 0.0018 | 9.23 | 36000 | 2.5503 |
| 0.0484 | 9.33 | 36400 | 2.7006 |
| 0.0359 | 9.43 | 36800 | 2.6202 |
| 0.006 | 9.54 | 37200 | 2.6260 |
| 0.0205 | 9.64 | 37600 | 2.7143 |
| 0.0153 | 9.74 | 38000 | 2.6923 |
| 0.0342 | 9.84 | 38400 | 2.6475 |
| 0.011 | 9.95 | 38800 | 2.6381 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.030137483030557632,
0.00047060439828783274,
-0.002195416484028101,
0.03718562051653862,
0.03751480579376221,
0.0071991910226643085,
-0.013058806769549847,
-0.019008943811058998,
-0.055864233523607254,
0.05724367871880531,
0.021715229377150536,
-0.0056671560741961,
0.036531712859869,
0.0... |
BigeS/DialoGPT-small-Rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-03-26T00:30:14Z | ---
tags:
- Tutankham-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Tutankham-v5
type: Tutankham-v5
metrics:
- type: mean_reward
value: 245.30 +/- 16.30
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Tutankham-v5**
This is a trained model of a PPO agent playing Tutankham-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Tutankham-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Tutankham-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Tutankham-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Tutankham-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed10/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Tutankham-v5 --seed 10
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Tutankham-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 10,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.008500318042933941,
-0.011277124285697937,
-0.012468668632209301,
0.03037998639047146,
0.045647237449884415,
0.0011399188078939915,
-0.017357978969812393,
-0.0219741053879261,
-0.018889227882027626,
0.06533312052488327,
0.022418873384594917,
-0.01964646205306053,
0.0012306582648307085,
... |
BinksSachary/DialoGPT-small-shaxx | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- Gopher-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Gopher-v5
type: Gopher-v5
metrics:
- type: mean_reward
value: 1376.00 +/- 791.85
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Gopher-v5**
This is a trained model of a PPO agent playing Gopher-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Gopher-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Gopher-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.02237728238105774,
0.000896273588296026,
-0.01622355729341507,
0.030607862398028374,
0.051061395555734634,
0.005469856783747673,
-0.004499400500208139,
-0.029587754979729652,
-0.03267678618431091,
0.07199783623218536,
0.019180450588464737,
-0.024871446192264557,
0.006060888525098562,
0.... |
BinksSachary/ShaxxBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2023-03-26T00:39:20Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hftoken Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| [
-0.027940554544329643,
-0.013020386919379234,
-0.026354804635047913,
0.029597749933600426,
0.027156105265021324,
0.014463169500231743,
-0.00026541444822214544,
0.003449898213148117,
-0.01345816534012556,
0.03292081877589226,
0.03313653543591499,
0.013032612390816212,
-0.024516554549336433,
... |
BinksSachary/ShaxxBot2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-03-26T00:39:54Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510084170164355076/7f6ijkJo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Paint</div>
<div style="text-align: center; font-size: 14px;">@roach_collector</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Paint.
| Data | Paint |
| --- | --- |
| Tweets downloaded | 732 |
| Retweets | 0 |
| Short tweets | 105 |
| Tweets kept | 627 |
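The "Tweets kept" row follows from the others: retweets and short tweets are filtered out before training (the counts come from the table above; the subtraction rule is the usual huggingtweets behaviour, assumed here):

```python
# Counts from the training-data table above.
downloaded = 732
retweets = 0
short_tweets = 105

# Tweets remaining after filtering out retweets and short tweets.
kept = downloaded - retweets - short_tweets
print(kept)  # matches the "Tweets kept" row
```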
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ud9k8dh2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @roach_collector's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/asbhe8tu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/asbhe8tu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/roach_collector')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
0.006159534677863121,
-0.030099283903837204,
0.002500110073015094,
0.035803645849227905,
0.050648283213377,
0.01382402703166008,
-0.024691393598914146,
-0.017000684514641762,
-0.03288530558347702,
0.04024040326476097,
-0.008281727321445942,
-0.010645044967532158,
-0.0003187209367752075,
0.... |
Blabla/Pipipopo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Gopher-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Gopher-v5
type: Gopher-v5
metrics:
- type: mean_reward
value: 12092.00 +/- 5138.32
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Gopher-v5**
This is a trained model of a PPO agent playing Gopher-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Gopher-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gopher-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Gopher-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
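The batch-related entries in the hyperparameters above are not independent: they follow from the rollout and distribution settings. The following is a minimal sketch (not part of the card) of how CleanRL typically derives them; the variable names mirror the dict keys above.

```python
# Illustrative derivation of the batch-size hyperparameters above,
# assuming CleanRL's usual scheme for distributed PPO/IMPALA runs.
local_num_envs = 30        # envs per process
num_steps = 20             # rollout length per update
world_size = 4             # number of distributed processes
num_minibatches = 2
total_timesteps = 50_000_000

local_batch_size = local_num_envs * num_steps               # samples per process per update
batch_size = local_batch_size * world_size                  # global samples per update
local_minibatch_size = local_batch_size // num_minibatches
minibatch_size = batch_size // num_minibatches
num_updates = total_timesteps // batch_size                 # training iterations

print(local_batch_size, batch_size, num_updates)
```

Plugging in the values above reproduces the dict: 600, 2400, 300, 1200, and 20833 updates.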
| [
-0.022225232794880867,
0.00013492125435732305,
-0.01627877540886402,
0.030785512179136276,
0.05116862431168556,
0.005585938226431608,
-0.004682149738073349,
-0.029552122578024864,
-0.03248395770788193,
0.07178109884262085,
0.018592594191432,
-0.02489437162876129,
0.0063590737991034985,
0.0... |
Blaine-Mason/hackMIT-finetuned-sst2 | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 300.00 +/- 26.08
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.011953539215028286,
-0.00810913648456335,
-0.011187289841473103,
0.03125074878334999,
0.043451372534036636,
0.0039203655906021595,
-0.009921424090862274,
-0.026917366310954094,
-0.04591561481356621,
0.05377274006605148,
0.03138656169176102,
0.0006421583238989115,
0.015711963176727295,
0... |
Blerrrry/Kkk | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 310.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
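The `mean_reward` metric reported in these cards (e.g. `310.00 +/- 0.00`) is the mean and standard deviation of per-episode returns from evaluation rollouts. A hedged sketch of that computation; the returns below are made-up illustrative numbers, not this agent's actual scores:

```python
# Illustrative computation of a "mean_reward X +/- Y" figure from
# per-episode returns (the numbers here are invented for the example).
import statistics

episodic_returns = [290.0, 310.0, 300.0, 320.0, 280.0]
mean_reward = statistics.mean(episodic_returns)
std_reward = statistics.pstdev(episodic_returns)  # population std, matching NumPy's default
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")  # → 300.00 +/- 14.14
```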
| [
-0.012140360660851002,
-0.007926341146230698,
-0.01111400593072176,
0.0313987210392952,
0.04410601779818535,
0.003857565578073263,
-0.010516067035496235,
-0.02668333239853382,
-0.04553505778312683,
0.05364866554737091,
0.029803724959492683,
0.0006645826506428421,
0.01616707444190979,
0.040... |
BlightZz/DialoGPT-medium-Kurisu | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 309.00 +/- 21.66
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.012145097367465496,
-0.007471712771803141,
-0.011607714928686619,
0.031217606738209724,
0.044041719287633896,
0.003656524233520031,
-0.010227277874946594,
-0.02703084610402584,
-0.04572483152151108,
0.0534757599234581,
0.03127647563815117,
0.0004337062709964812,
0.016139978542923927,
0.... |
BlightZz/MakiseKurisu | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- Freeway-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Freeway-v5
type: Freeway-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.018143007531762123,
-0.014345142990350723,
-0.020861774682998657,
0.015578685328364372,
0.031081726774573326,
0.00996746588498354,
-0.0162966325879097,
-0.004896059166640043,
-0.030652083456516266,
0.060604386031627655,
0.03815045952796936,
-0.01239757239818573,
0.009660083800554276,
0.... |
BlueGamerBeast/DialoGPT-small-joshua | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Gravitar-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Gravitar-v5
type: Gravitar-v5
metrics:
- type: mean_reward
value: 1875.00 +/- 845.95
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Gravitar-v5**
This is a trained model of a PPO agent playing Gravitar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Gravitar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Gravitar-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.014880935661494732,
0.0020992436911910772,
-0.027401944622397423,
0.033501047641038895,
0.05003463476896286,
0.0024502857122570276,
-0.01765524409711361,
-0.02242935076355934,
-0.014419172890484333,
0.06642323732376099,
0.02996959164738655,
-0.02402007021009922,
0.009907850064337254,
0.... |
BobBraico/bert-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: shermansiu/ppo-Huggy
3. Select your *.nn* or *.onnx* file
4. Click on Watch the agent play 👀
| [
-0.041913192719221115,
-0.002411218825727701,
-0.005686071235686541,
0.04700139909982681,
0.026212243363261223,
0.020045584067702293,
-0.026802748441696167,
-0.03186472877860069,
-0.005785549990832806,
0.04979271814227104,
0.02034086547791958,
-0.012475077994167805,
0.01829388737678528,
0.... |
BobBraico/distilbert-base-uncased-finetuned-imdb | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- IceHockey-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IceHockey-v5
type: IceHockey-v5
metrics:
- type: mean_reward
value: 5.20 +/- 3.71
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **IceHockey-v5**
This is a trained model of a PPO agent playing IceHockey-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id IceHockey-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/IceHockey-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/IceHockey-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/IceHockey-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id IceHockey-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'IceHockey-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
| [
-0.004949582740664482,
-0.016224220395088196,
-0.013076708652079105,
0.0323563851416111,
0.04653916507959366,
-0.027345048263669014,
-0.01660674624145031,
-0.02123815380036831,
-0.041585616767406464,
0.0541335828602314,
0.013846703805029392,
-0.011712085455656052,
0.025281066074967384,
0.0... |