modelId stringlengths 4 81 | tags list | pipeline_tag stringclasses 17 values | config dict | downloads int64 0 59.7M | first_commit timestamp[ns, tz=UTC] | card stringlengths 51 438k | embedding list |
|---|---|---|---|---|---|---|---|
Al/mymodel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 10.39 +/- 65.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo-lunarlander',
 'seed': 42,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'ppo-cleanrl-lunarlanderv2',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.00025,
 'num_envs': 8,
 'num_steps': 256,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.95,
 'gae_lambda': 0.9,
 'num_minibatches': 8,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.3,
 'clip_vloss': True,
 'ent_coef': 0.02,
 'vf_coef': 0.7,
 'max_grad_norm': 0.8,
 'target_kl': 0.01,
 'repo_id': 'eryzml/ppo-LunarLander-v2-CleanRL',
 'batch_size': 2048,
 'minibatch_size': 256}
```
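As a sanity check on the values above: in CleanRL-style PPO, `batch_size` and `minibatch_size` are derived rather than set independently. A minimal sketch of that derivation (plain arithmetic, no RL dependencies):

```python
# Derived PPO rollout sizes, CleanRL convention:
# batch_size = num_envs * num_steps, minibatch_size = batch_size // num_minibatches.
num_envs = 8
num_steps = 256
num_minibatches = 8

batch_size = num_envs * num_steps               # one rollout's worth of transitions
minibatch_size = batch_size // num_minibatches  # SGD chunk per update epoch

print(batch_size, minibatch_size)  # 2048 256
```

These match the `'batch_size': 2048` and `'minibatch_size': 256` entries in the hyperparameter dump.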
| [
-0.00797932967543602,
0.002766404300928116,
-0.018551331013441086,
0.017130449414253235,
0.0601448155939579,
-0.027358753606677055,
0.007431551814079285,
-0.036534398794174194,
-0.027831675484776497,
0.06622229516506195,
0.03242829814553261,
-0.028088955208659172,
-0.0025521046482026577,
0... |
AlErysvi/Erys | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gbert-large-finetuned-cust18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-large-finetuned-cust18
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.604 | 1.0 | 391 | 0.3560 |
| 0.3497 | 2.0 | 782 | 0.2838 |
| 0.2812 | 3.0 | 1173 | 0.2484 |
| 0.2452 | 4.0 | 1564 | 0.2232 |
| 0.2253 | 5.0 | 1955 | 0.2240 |
| 0.2202 | 6.0 | 2346 | 0.1993 |
| 0.1922 | 7.0 | 2737 | 0.1747 |
| 0.182 | 8.0 | 3128 | 0.1631 |
| 0.1609 | 9.0 | 3519 | 0.1555 |
| 0.1553 | 10.0 | 3910 | 0.1434 |
| 0.147 | 11.0 | 4301 | 0.1399 |
| 0.144 | 12.0 | 4692 | 0.1340 |
| 0.1307 | 13.0 | 5083 | 0.1319 |
| 0.128 | 14.0 | 5474 | 0.1490 |
| 0.1304 | 15.0 | 5865 | 0.1338 |
| 0.1165 | 16.0 | 6256 | 0.1233 |
| 0.1456 | 17.0 | 6647 | 0.1673 |
| 0.1419 | 18.0 | 7038 | 0.1591 |
| 0.1447 | 19.0 | 7429 | 0.1360 |
| 0.1317 | 20.0 | 7820 | 0.1232 |
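The step counts in the table are consistent with the hyperparameters: 391 optimizer steps per epoch at a train batch size of 32 implies roughly 12.5k training examples, and 20 epochs yield the final step count of 7820. A quick cross-check:

```python
# Cross-check the training-results table against the hyperparameters above.
steps_per_epoch = 391
train_batch_size = 32
num_epochs = 20

# Upper bound on the training-set size; the last batch of each epoch may be partial.
approx_train_examples = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(approx_train_examples, total_steps)  # 12512 7820
```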
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.019203878939151764,
0.014654843136668205,
-0.014421327039599419,
0.023238036781549454,
0.04408203437924385,
0.006472317967563868,
-0.02971368283033371,
-0.014426941983401775,
-0.024132046848535538,
0.03897459805011749,
0.015143622644245625,
-0.011048651300370693,
0.02244974486529827,
0.... |
Alaeddin/convbert-base-turkish-ner-cased | [
"pytorch",
"convbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"ConvBertForTokenClassification"
],
"model_type": "convbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.20 +/- 4.85
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Nasree/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, since the experiment resumes at the step count where it previously concluded.
| [
-0.0402790792286396,
0.0007417293381877244,
0.013315638527274132,
0.03487486019730568,
0.024647504091262817,
-0.00867491029202938,
-0.010923529975116253,
-0.02529478818178177,
-0.03783300146460533,
0.05680837482213974,
0.038085322827100754,
-0.00012539917952381074,
0.02094574272632599,
0.0... |
AlanDev/DallEMiniButBetter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model Stuprosur/luke-base-conll2003 is restricted and you are not in the authorized list. Visit https://huggingface.co/Stuprosur/luke-base-conll2003 to ask for access. | [
-0.030825870111584663,
0.0020307470113039017,
-0.031927403062582016,
-0.0006482356111519039,
0.045981515198946,
0.024601267650723457,
-0.018913941457867622,
0.0006310821627266705,
-0.0554487481713295,
0.039657577872276306,
0.06081718951463699,
-0.03272661939263344,
0.0005544184241443872,
0... |
AlanDev/dall-e-better | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-18T10:51:49Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.31 +/- 4.45
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r cleth/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, since the experiment resumes at the step count where it previously concluded.
| [
-0.04330671578645706,
-0.0019302029395475984,
0.010732555761933327,
0.03737117722630501,
0.02589854598045349,
-0.012376943603157997,
-0.01078606303781271,
-0.026313044130802155,
-0.03911599889397621,
0.0558839850127697,
0.03623034805059433,
0.0014547992032021284,
0.017899299040436745,
0.02... |
AlanDev/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: torreygooch/ppo-Huggy
3. Select your `*.nn` or `*.onnx` file
4. Click on Watch the agent play 👀
| [
-0.040807392448186874,
-0.0021986046340316534,
-0.006883423309773207,
0.04772966355085373,
0.024873824790120125,
0.019755421206355095,
-0.02559661865234375,
-0.03227920085191727,
-0.003664253978058696,
0.049510616809129715,
0.020278114825487137,
-0.013878361321985722,
0.01867097243666649,
... |
AlbertHSU/BertTEST | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: dhorbach/hfc_ppo-SnowballTargetTESTCOLAB
3. Select your `*.nn` or `*.onnx` file
4. Click on Watch the agent play 👀
| [
-0.029089828953146935,
-0.007483943365514278,
-0.017042245715856552,
0.05256250128149986,
0.033439476042985916,
0.02794807404279709,
-0.003461623564362526,
-0.03361339867115021,
-0.024038616567850113,
0.04493206366896629,
0.02614184282720089,
-0.0049030110239982605,
0.018062520772218704,
0... |
Aleksandar/distilbert-srb-ner-setimes-lr | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.37 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| [
-0.049998439848423004,
-0.01611138880252838,
-0.00848323106765747,
0.03607002645730972,
0.04064635559916496,
0.002593850716948509,
-0.021110888570547104,
-0.010504149831831455,
-0.03750857710838318,
0.05755549296736717,
0.024569794535636902,
-0.003226431319490075,
0.03175237402319908,
0.00... |
Aleksandar/distilbert-srb-ner-setimes | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -26.75 +/- 72.19
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'seed': 1,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 100000,
 'num_envs': 8,
 'num_steps': 128,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'learning_rate': 0.0003,
 'clip_range': 0.2,
 'value_loss_coef': 0.5,
 'entropy_loss_coef': 0.01,
 'max_grad_norm': 0.5,
 'update_epochs': 4,
 'mini_batch_size': 64,
 'eval_freq': 10,
 'no_cuda': False,
 'repo_id': 'jrauch/ppo-{env}'}
```
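With these settings, each rollout collects `num_envs * num_steps = 1024` transitions, so a budget of 100000 timesteps corresponds to roughly 97 policy updates. A small sketch of the derived quantities (assuming the usual CleanRL-style rollout loop):

```python
# Derived quantities for the PPO run above (assumes a CleanRL-style rollout loop).
num_envs = 8
num_steps = 128
total_timesteps = 100_000
mini_batch_size = 64

rollout_size = num_envs * num_steps            # transitions gathered per update
num_updates = total_timesteps // rollout_size  # full policy updates over the run
minibatches_per_epoch = rollout_size // mini_batch_size

print(rollout_size, num_updates, minibatches_per_epoch)  # 1024 97 16
```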
| [
-0.010717052966356277,
0.009701437316834927,
-0.009954032488167286,
0.010473782196640968,
0.05789430812001228,
-0.031597644090652466,
0.011215804144740105,
-0.02858540043234825,
-0.023755047470331192,
0.06709631532430649,
0.027452737092971802,
-0.025748902931809425,
0.0004423485661391169,
... |
Aleksandar/electra-srb-ner-setimes-lr | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: NiallRooney/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NiallRooney/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1463
- Validation Loss: 2.8485
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -969, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1463 | 2.8485 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.02471773512661457,
-0.0023848204873502254,
-0.010817586444318295,
0.026973096653819084,
0.04025393724441528,
0.017957793548703194,
-0.02139848843216896,
-0.02169829048216343,
-0.0352872870862484,
0.06736769527196884,
0.029155367985367775,
-0.027513112872838974,
0.030229615047574043,
0.0... |
Aleksandar/electra-srb-oscar | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
datasets:
- magicgh/alpaca-cleaned
---
# LLaMa-13B LoRA Alpaca
This repo contains a low-rank adaptation (LoRA) of LLaMa-13B, fine-tuned on the cleaned Alpaca dataset.
This version of the weights was trained with the following hyperparameters:
* Epochs: 3
* Cutoff length: 512
* Learning rate: 3e-4
* LoRA r: 8
* LoRA alpha: 16
* LoRA dropout: 0.05
* LoRA target modules: q_proj, k_proj | [
-0.025356169790029526,
-0.006869803182780743,
-0.016621557995676994,
0.027985641732811928,
0.045161906629800797,
0.003178328275680542,
-0.014941351488232613,
0.008247161284089088,
-0.012213184498250484,
0.07072144746780396,
0.024185042828321457,
-0.026338040828704834,
0.03017416037619114,
... |
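For a sense of scale of the LoRA setup above: with `r = 8`, each adapted projection gains two low-rank matrices, A (r × d_in) and B (d_out × r), and their product is scaled by `alpha / r`. A rough per-module parameter count, assuming a hidden size of 5120 for the 13B model (an assumption, not stated in the card):

```python
# Rough LoRA parameter count per adapted projection.
d_in = d_out = 5120  # hypothetical hidden size for LLaMa-13B q_proj/k_proj
r, alpha = 8, 16

params_per_module = r * d_in + d_out * r  # A: r x d_in, plus B: d_out x r
scaling = alpha / r                       # LoRA update is scaled by alpha / r

print(params_per_module, scaling)  # 81920 2.0
```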
Aleksandar1932/distilgpt2-rock | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8704318936877077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3095
- Accuracy: 0.87
- F1: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.015447968617081642,
-0.010507614351809025,
-0.030900487676262856,
0.045737870037555695,
0.036214374005794525,
0.03704932704567909,
-0.019956396892666817,
-0.019939249381422997,
-0.036331724375486374,
0.06534610688686371,
0.046074993908405304,
-0.01893077790737152,
0.020578112453222275,
... |
Aleksandar1932/gpt2-pop | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8638119036801455
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1368
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2564 | 1.0 | 525 | 0.1631 | 0.8187 |
| 0.1271 | 2.0 | 1050 | 0.1349 | 0.8546 |
| 0.081 | 3.0 | 1575 | 0.1368 | 0.8638 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.12.1+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.025794854387640953,
-0.0030857527162879705,
0.0072917272336781025,
0.018928037956357002,
0.029401395469903946,
0.025823213160037994,
-0.02368902787566185,
-0.010727642104029655,
-0.023046299815177917,
0.05071016401052475,
0.020896269008517265,
-0.04269265756011009,
0.010536354035139084,
... |
Aleksandar1932/gpt2-rock-124439808 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jjdelgado/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jjdelgado/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2544
- Validation Loss: 0.1951
- Train Accuracy: 0.9230
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
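The `PolynomialDecay` schedule above, with `power = 1.0`, is simply a linear ramp from 2e-5 down to 0 over 7810 steps. A minimal reimplementation for intuition (not the Keras class itself):

```python
# Linear learning-rate decay, mirroring PolynomialDecay(power=1.0) from the config above.
initial_lr = 2e-05
end_lr = 0.0
decay_steps = 7810
power = 1.0

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps, clamped at decay_steps."""
    frac = min(step, decay_steps) / decay_steps
    return (initial_lr - end_lr) * (1.0 - frac) ** power + end_lr

print(lr_at(0), lr_at(3905), lr_at(7810))  # 2e-05 1e-05 0.0
```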
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2544 | 0.1951 | 0.9230 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
-0.034916263073682785,
-0.0225775558501482,
-0.0099159711971879,
0.01766415499150753,
0.031083084642887115,
0.015431268140673637,
-0.00934138149023056,
-0.021871980279684067,
-0.02740827016532421,
0.06845422089099884,
0.028433891013264656,
-0.014360632747411728,
0.004353306256234646,
0.025... |
AlekseyKulnevich/Pegasus-HeaderGeneration | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.94 +/- 19.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders — substitute the actual Hub repo for this model):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id/filename — replace with the actual ones for this model
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| [
-0.037479281425476074,
-0.0025865358766168356,
-0.005683297291398048,
0.02556438185274601,
0.0456557534635067,
-0.021417340263724327,
-0.005524694453924894,
-0.027950040996074677,
-0.033309198915958405,
0.06649783998727798,
0.03285546973347664,
-0.023944245651364326,
0.022630125284194946,
... |
AlekseyKulnevich/Pegasus-Summarization | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- jax-diffusers-event
inference: true
---
# controlnet - JFoz/dog-pose
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.
prompt: a small yorkshire terrier dog is sitting on a cushion

prompt: a yellow dog standing on a lawn

| [
-0.026970181614160538,
0.006027732510119677,
-0.02687651850283146,
0.02660812996327877,
0.04343167692422867,
0.015064711682498455,
0.0014397703344002366,
-0.009623264893889427,
-0.010126855224370956,
0.05680504068732262,
-0.011113278567790985,
-0.03467267379164696,
-0.004163063131272793,
0... |
AlexaMerens/Owl | [
"license:cc"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| [
-0.0292340237647295,
0.01823023334145546,
0.003660630900412798,
0.009176398627460003,
0.04400138556957245,
-0.01899772323668003,
-0.021622175350785255,
-0.015784479677677155,
-0.029898837208747864,
0.0846095085144043,
0.01731882616877556,
-0.008446605876088142,
0.017369555309414864,
0.0164... |
Alireza1044/albert-base-v2-mrpc | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 204 | null | ---
license: creativeml-openrail-m
language:
- ja
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- text-to-image
---
**[Announcement]**
**chilled_remix and reversemix underwent a version change on May 21, 2023 and moved to v2.**
**v1 has accordingly been removed. Those who have already downloaded v1 may of course continue to use it.**
License:[CreativeML Open RAIL-M](https://huggingface.co/sazyou-roukaku/chilled_remix/blob/main/license_v2.txt)<br>
Additional Copyright: sazyou_roukaku (TwitterID [@sazyou_roukaku](https://twitter.com/sazyou_roukaku)) as of May 21, 2023<br>
This model is licensed under CreativeML Open RAIL-M, with no change to the license itself.<br>
However, sazyou_roukaku has been added as an additional author.<br>
As stated in CreativeML Open RAIL-M,<br>
we take no part whatsoever in works generated with this model, except for the cases covered by Use Restriction A of the license.<br>
Use for criminal purposes or in specific professional domains such as medical imaging is prohibited by Use Restriction A.<br>
Please be sure to check it before use.<br>
We also accept no liability whatsoever. Please use the model with the understanding that we are exempt from responsibility.<br>
<h4>Restrictions</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tr>
<td class="align-middle px-4 w-8">
<span class="text-green-500">
<h5>OK</h5>
</span>
</td>
<td>
著作者表記を入れずにモデルを使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-4 w-8">
<span class="text-green-500">
<h5>OK</h5>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr>
<td class="align-middle px-4 w-8">
<span class="text-green-500">
<h5>OK</h5>
</span>
</td>
<td>
商用画像生成サービスに、このモデルを使用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-4 w-8">
<span class="text-green-500">
<h5>OK</h5>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有・配布する<br>
Share merges using this model
</td>
</tr>
<tr>
<td class="align-middle px-4 w-8">
<span class="text-green-500">
<h5>OK</h5>
</span>
</td>
<td>
このモデル、または派生モデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr>
<td class="align-middle px-4 w-8">
<span class="text-green-500">
<h5>OK</h5>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</table>
</div>
Note that, under the CreativeML Open RAIL-M license, selling the model itself or using it in commercial image-generation services<br>
cannot actually be restricted unless such restrictions are added to Use Restriction A.<br>
They are marked OK in the Civitai restriction table above only in consideration of the burden on people who merge this model;<br>
we do not actively recommend these uses, and we accept no responsibility for any problems arising from them.<br>
Please keep this point in mind.<br>
<br>
**Recommended settings, model differences, and prompts**
Version 2 is distributed only as an fp16 model with the VAE baked in.
The basic stance is to use **chilled_remix as the main model** and to consider reversemix as well, depending on your preference.
Note: chilled_remix is a model created to protect chilled_re-generic users from the confusion of a certain controversy.
Because it could not, by nature, cover every user's outputs, reversemix was created as a secondary model.
reversemix shows little semi-realism in faces even without LoRA, but tends to produce younger-looking results overall.
chilled_remix was created for chilled_re-generic users, many of whom are LoRA enthusiasts,
so faces are designed to reach a consistent level of realism when LoRA is used.
Realistic results are possible with prompts alone, but it is easier to achieve them with a little LoRA.
**CLIP setting: clip skip: 2** is recommended.
Honestly, I see little difference between outputs with no badhand-style negative TI (and no hand-related negatives)
and outputs that use a badhand-style negative TI.
Use whichever you prefer.
The model is quite strong with natural-language sentence prompts, but my style is to specify details
beyond the situation, such as facial features, with word prompts to taste.
Word-only prompts are not a problem either, so use whichever style is easiest for you.
As for quality prompts, I do not find "high quality" and the like effective.
"masterpiece" seems to change facial structure, but is questionable as a quality boost.
"high resolution", however, is effective for backgrounds and textures.
There are variants such as "high res" and "Hires", but I trust "high resolution" the most.
A prompt I always include:
(symmetrical clear eyes:1.3) goes into every generation.
I sometimes split it up together with eye color and other additions, but including this prompt is my default.
My go-to negative prompt base:
```
nipple,(manicure:1.2),(worst quality:2),(low quality:2),(long neck:2),(undressing:1.5),
```
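The weighted tokens above use the common `(token:weight)` emphasis syntax (e.g. `(worst quality:2)`). As a rough illustration of how such a prompt can be split into (token, weight) pairs — a simplified sketch, not the WebUI's actual parser:

```python
import re

# Matches "(token:weight)" emphasis groups, e.g. "(manicure:1.2)".
PATTERN = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_emphasis(prompt):
    """Return (token, weight) pairs; unweighted tokens default to weight 1.0."""
    pairs = []
    last = 0
    for m in PATTERN.finditer(prompt):
        # Plain comma-separated tokens before this weighted group
        for token in prompt[last:m.start()].split(","):
            if token.strip():
                pairs.append((token.strip(), 1.0))
        pairs.append((m.group(1), float(m.group(2))))
        last = m.end()
    for token in prompt[last:].split(","):  # trailing plain tokens
        if token.strip():
            pairs.append((token.strip(), 1.0))
    return pairs

print(parse_emphasis("nipple,(manicure:1.2),(worst quality:2)"))
# [('nipple', 1.0), ('manicure', 1.2), ('worst quality', 2.0)]
```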
**Models used in the merge**
real-max-v3.4
(https://civitai.com/models/60188/real-max-v34) ©dawn6666
fantasticmix_v10 (formerly named fantasticmixReal_v10)
(https://civitai.com/models/22402/fantasticmixreal) ©michin
dreamshaper_5Bakedvae
(https://civitai.com/models/4384/dreamshaper) ©Lykon
epicrealism_newAge
(https://civitai.com/models/25694) ©epinikion
diamondCoalMix_diamondCoalv2
(https://civitai.com/models/41415) ©EnthusiastAI
**FAQ**
**Q1: Why was v2 released and distribution of v1 discontinued?**
**A1:**
v1 used a model (**realbiter_v10**) whose terms prohibit changing restrictions even after merging,
so it inherited the Civitai restriction "NG: Have different permissions when sharing merges".
This can be read as meaning that restrictions can be neither added nor removed; everything else, however, was OK.
So, for example, when merging with a model that carries
*NG: Sell this model or merges using this model*
*NG: Have different permissions when sharing merges*
a **contradiction between restrictions** arises, and in principle the merge **cannot be published**.
For anyone making merges this is a very troublesome restriction, and it also conflicts with the CreativeML Open RAIL-M
provision that **restrictions may be added as long as they do not deviate from the license**.
I found this very distasteful, and this version upgrade exists to remove that restriction.
**Distribution of v1 was discontinued because its differing restrictions are confusing and could cause trouble.**
In addition, CreativeML Open RAIL-M contains the clause that
**"with updates, users should in principle make an effort to use the latest version."**
The rights holder has the right to have users move to the latest version, and users have a duty to make that effort.
**However, I will not exercise this right, so you may continue to use v1 without issue.**
That said, continuing to publish the old version despite this clause would lack consistency,
so I have, with apologies, ended its distribution.
Thank you for your understanding.
Redistribution of v1 is governed by CreativeML Open RAIL-M.
**Q2: Are there any problems or contradictions with the current restrictions?**
**A2:** **fantasticmix_v10**, **diamondCoalMix_diamondCoalv2**, and **dreamshaper_5Bakedvae** are
**OK: Have different permissions when sharing merges**, so their restriction can be lifted.
**epicrealism_newAge** and **real-max-v3.4** carry no restrictions, so this release is published with no restrictions at all.
Even if the licenses or restrictions of the merged models change later,
this release is based on their licenses and restrictions as of May 17 and follows creativeml-openrail-m.
Screenshots of the relevant models are kept in MergeModel_LicenseSS_v2.
If a serious problem arises with one of the merged models, we may stop publishing this model
and call for users to stop using it, but **we will not add restrictions for reasons originating on our side.**
<br>
<br>
<br>
<br>
<br>
<br>
**---------------------------- The following is information for the old versions ------------------------**
Minimal information is kept here about **chilled_remix_v1/chilled_reversemix_v1**.
If you need details, please check the descriptions from that time in the edit history.
Screenshots of the restrictions of the relevant models are also kept in MergeModel_LicenseSS.
License:[CreativeML Open RAIL-M](https://huggingface.co/sazyou-roukaku/chilled_remix/blob/main/license.txt)<br>
Additional Copyright: sazyou_roukaku (TwitterID [@sazyou_roukaku](https://twitter.com/sazyou_roukaku)) as of April 18, 2023
This model is licensed under CreativeML Open RAIL-M, with no change to the license itself.
However, sazyou_roukaku has been added as an additional author.
As stated in creativeml-openrail-m, we take no part in works generated with this model, except for the cases covered by Use Restriction A.
We also accept no liability whatsoever. Please use the model with the understanding that we are exempt from responsibility.
**Restrictions**
| Allowed | Permission |
|:-------:|-----------------------------------------------------|
| OK | Use the model without crediting the creator |
| OK | Sell images they generate |
| OK | Run on services that generate images for money |
| OK | Share merges using this model |
| OK | Sell this model or merges using this model |
| NG | Have different permissions when sharing merges |
| | | | [
0.016293242573738098,
-0.012937135994434357,
-0.01648392528295517,
0.028381239622831345,
0.047156013548374176,
0.021105119958519936,
0.004715483635663986,
-0.010738781653344631,
-0.02036697417497635,
0.04458651691675186,
0.015471866354346275,
-0.008205799385905266,
0.022135285660624504,
0.... |
Andrija/SRoBERTa-base-NER | [
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"multilingual",
"dataset:hr500k",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dipterv6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dipterv6
This model is a fine-tuned version of [ahmedrachid/FinancialBERT-Sentiment-Analysis](https://huggingface.co/ahmedrachid/FinancialBERT-Sentiment-Analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0300
- Accuracy: 0.9907
- F1: 0.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.031718458980321884,
-0.014677086845040321,
-0.026508035138249397,
0.020593278110027313,
0.024856265634298325,
0.030956339091062546,
-0.002333094598725438,
-0.01652723364531994,
-0.04695138335227966,
0.059692248702049255,
0.04048600047826767,
-0.015603060834109783,
0.00934750959277153,
0... |
Andrija/SRoBERTa | [
"pytorch",
"roberta",
"fill-mask",
"hr",
"sr",
"multilingual",
"dataset:leipzig",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 88 | null | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: Apocalypse-19/ppo-Snowball
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.030443409457802773,
-0.005201241001486778,
-0.020683977752923965,
0.0522700659930706,
0.03713496774435043,
0.02721015177667141,
0.0009279105579480529,
-0.03678877279162407,
-0.0266539566218853,
0.04785686731338501,
0.030083637684583664,
-0.0052183847874403,
0.015370369888842106,
0.03311... |
Andrija/SRoBERTaFastBPE | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Apocalypse-19/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| [
-0.052316706627607346,
0.00013324577594175935,
-0.009534897282719612,
0.051572930067777634,
0.026825247332453728,
0.03122084215283394,
-0.008877793326973915,
-0.02558203600347042,
-0.0032383492216467857,
0.052005838602781296,
0.030512236058712006,
-0.00957836490124464,
0.0026763570494949818,... |
Andy1621/uniformer | [
"license:mit",
"has_space"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This is a quantised version in safetensor format of the oasst-llama-13b-2-epochs model from dvruette/oasst-llama-13b-2-epochs
It has a siginficant speed up for inference when used on oobabooga.
Run with..
python server.py --model oasst-llama-13b-2-epochs-GPTQ-4bit-128g --wbits 4 --groupsize 128
| [
-0.042642440646886826,
-0.017556462436914444,
-0.008521171286702156,
-0.01039094477891922,
0.06789209693670273,
0.0012928270734846592,
0.015203006565570831,
0.017576707527041435,
-0.04278028756380081,
0.03541653975844383,
0.0431511327624321,
-0.005863500759005547,
0.031175779178738594,
0.0... |
AndyJ/prompt_finetune | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: deep-rl-class-q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Sebschub/deep-rl-class-q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
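Once loaded, acting with a tabular Q-learning agent is just a greedy argmax over the Q-table row for the current state. A minimal illustrative sketch (the toy table below is a hypothetical stand-in for the contents of `q-learning.pkl`):

```python
# Toy stand-in for the pickled Q-table: one row per state, one column per action.
qtable = [
    [0.0, 1.5, 0.2],   # in state 0, action 1 has the highest Q-value
    [0.7, 0.1, 0.3],   # in state 1, action 0 has the highest Q-value
]

def greedy_action(qtable, state):
    """Exploitation only: pick the action with the highest Q-value for this state."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)

print(greedy_action(qtable, 0))  # 1
print(greedy_action(qtable, 1))  # 0
```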
| [
-0.02143682911992073,
-0.01590372435748577,
-0.008387772366404533,
0.023197906091809273,
0.04979657009243965,
-0.0004414650029502809,
-0.02378174103796482,
0.004110678564757109,
-0.034472689032554626,
0.05348220467567444,
0.017694830894470215,
-0.008246203884482384,
0.010512583889067173,
0... |
Ani123/Ani | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Tevatron
```
bs=512
epoch=40
save_steps=4000
backbone=bert-base-multilingual-cased
output_dir=mlm.bs-$bs.epoch-$epoch.$backbone
WANDB_PROJECT=mlm-mrtydi-DDR \
python examples/dense-adapter/dense-adapter-train.py \
--output_dir $output_dir \
--model_name_or_path $backbone \
--tokenizer_name bert-base-multilingual-cased \
--save_steps $save_steps \
--dataset_name Tevatron/msmarco-passage \
--fp16 \
--per_device_train_batch_size $bs \
--train_n_passages 2 \
--learning_rate 1e-5 \
--q_max_len 32 \
--p_max_len 128 \
--num_train_epochs $epoch \
--logging_steps 100 \
--overwrite_output_dir \
--dataloader_num_workers 4 \
``` | [
-0.05246732383966446,
0.0012004728196188807,
-0.0056147449649870396,
0.045154500752687454,
0.057022541761398315,
0.03118421509861946,
-0.00989964883774519,
-0.012412892654538155,
-0.026450878009200096,
0.03666210547089577,
0.012190907262265682,
-0.009299340657889843,
0.01731160283088684,
0... |
Anirbanbhk/Hate-speech-Pretrained-movies | [
"tf",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for adurmus/resnet18-random | [
-0.017891190946102142,
-0.017293279990553856,
0.00045842916006222367,
0.0013799077132716775,
0.03576604276895523,
0.003981404937803745,
0.0015231857541948557,
0.010671314783394337,
-0.021889574825763702,
0.0478297658264637,
0.025523506104946136,
0.008762048557400703,
-0.02700875513255596,
... |
AnjanBiswas/distilbert-base-uncased-finetuned-emotion | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- quoref
model-index:
- name: distilbert-base-uncased_mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_mod
This model is a fine-tuned version of [damapika/distilbert-base-uncased_mod](https://huggingface.co/damapika/distilbert-base-uncased_mod) on the quoref dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6873 | 1.0 | 1213 | 1.6969 |
| 1.1652 | 2.0 | 2426 | 1.8045 |
| 0.7953 | 3.0 | 3639 | 2.0147 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.011471164412796497,
-0.004382266663014889,
-0.031779710203409195,
0.051453519612550735,
0.04778299853205681,
0.01525360718369484,
-0.008975868113338947,
-0.035996828228235245,
-0.04646426811814308,
0.05384756252169609,
0.021417096257209778,
-0.03358869254589081,
0.00737642589956522,
0.0... |
Ann2020/rubert-base-cased-sentence-finetuned-ner_tags | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.96 +/- 5.85
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kraken2404/rl_course_vizdoom_health_gathering_supreme_v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
| [
-0.04264308139681816,
-0.004199343267828226,
0.013006981462240219,
0.039004094898700714,
0.024802466854453087,
-0.011798640713095665,
-0.00884766224771738,
-0.02683691494166851,
-0.038860984146595,
0.05611454322934151,
0.03665054216980934,
0.005966393277049065,
0.0174877792596817,
0.030700... |
Anomic/DialoGPT-medium-loki | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: other
---
This model has been republished and its ownership transferred to Civitai with the full permissions of the model creator. They have asked that all identifying information about them be removed from the model details.
The author modified the license (22/Feb).
It is not the usual "creativeml-openrail-m".
Check the permissions and license below (a modified Dreamlike license).
When the author made samples, his eta (see settings) was "0" and ENSD was "1".
He has since changed eta to "0.67" and ENSD to "31337" (20/Feb/2023).
New merge model for 2.5D illustration here!!
https://civitai.com/models/9291/sunshinemix
Notes from the author:
IMPORTANT: First of all, I never suggest regenerating images of "real" persons, only photo-"realistic" images.
I solemnly declare: In principle, this model is prohibited from being used for training style models based on portraits of celebrities and public figures, because it will cause controversy and have a negative impact on the development of the AI community. If you must violate the above statement to train the relevant model and release it publicly, please delete all descriptions related to this model in your release notes. Thank you for your support and understanding.
I appreciate that you enjoy my model.
But it might cause legal conflicts if you make works/LoRAs/embeddings named after an actual person or copyrighted character.
I beg you never to publish them together with my model.
If you can't avoid an actual name/copyright, just remove my model from your works.
I never suggest regenerating actual persons or copyrighted characters with my models.
And I never want any legal conflicts arising from my models.
PLEASE!! Be conscious of legality and privacy!!
<EXPLANATION of this model>
Konnichiwa!!!!
・This is a merge of "Basilmix" (nuigurumi/basil_mix · Hugging Face)
+ wonderful realistic models.
(PoV Skin Texture - r34 Lucid Black | Stable Diffusion Checkpoint | Civitai: https://civitai.com/models/4486/pov-skin-texture-r34-lucid-black
PoV Skin Texture - Dreamlike r34 | Stable Diffusion Checkpoint | Civitai: https://civitai.com/models/4481/pov-skin-texture-dreamlike-r34
by twilightBOO: https://civitai.com/user/twilightBOO)
Due to using Dreamlike Diffusion 1.0, this model has the following license:
Modified license (Dreamlike Photoreal allowed me to modify the license.)
This model is licensed under a modified CreativeML OpenRAIL-M license.
- You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. If you want to, please email us at contact@dreamlike.art
- You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less
(Edit: We sincerely ban using the outputs of the model or the outputs of the model's derivatives (further merged models and LoRAs included) for commercial purposes, because it would cause controversy and have a negative impact on the development of the AI community, whether or not the images were generated from a real person.)
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md | [
-0.028307823464274406,
-0.02080088108778,
-0.0024773008190095425,
0.02313806489109993,
0.04531547799706459,
0.015479391440749168,
-0.0024591097608208656,
-0.018189888447523117,
-0.029477039352059364,
0.06506245583295822,
0.04627399519085884,
-0.00744202034547925,
0.008724330924451351,
0.02... |
Anonymous/ReasonBERT-RoBERTa | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-cbp-lkg-qa-small-finetuned
Google's Flan T5 model ([flan-t5-small](https://huggingface.co/google/flan-t5-small)) finetuned over a cleaned version of the Legal Knowledge Graph using triples formulated as QA pairs.
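As a rough illustration of casting a knowledge-graph triple as a QA pair (the actual templates used for this model are not documented here; the function below is hypothetical):

```python
def triple_to_qa(subj, rel, obj):
    # Hypothetical template; the real prompt format for this model is unknown.
    return {"question": f"What does {subj} {rel}?", "answer": obj}

pair = triple_to_qa("Section 12", "refer to", "Section 7")
print(pair["question"])  # What does Section 12 refer to?
```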
| [
-0.004257984459400177,
-0.012233540415763855,
0.009933210909366608,
0.03744771331548691,
0.015537945553660393,
0.005507868714630604,
-0.014704388566315174,
0.029306545853614807,
-0.010755890980362892,
0.041229359805583954,
0.01790647953748703,
0.002032193588092923,
0.03184675797820091,
0.0... |
AnonymousSub/AR_EManuals-BERT | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
inference: false
---
**NOTE: This "delta model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights.
See https://github.com/lm-sys/FastChat#vicuna-weights for instructions.
<br>
<br>
# Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://vicuna.lmsys.org/
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
## Major updates of weights v1.1
- Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries.
- Fix the supervised fine-tuning loss computation for better model quality. | [
-0.029803819954395294,
-0.015434031374752522,
0.003688773373141885,
0.02914821170270443,
0.040279287844896317,
0.015772856771945953,
0.00666181230917573,
0.012809240259230137,
0.005031853448599577,
0.04294076934456825,
0.04340023919939995,
0.0009843780426308513,
0.023762209340929985,
0.052... |
AnonymousSub/AR_rule_based_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent Playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # the load_from_hub helper is defined in the Deep RL Course notebook

model = load_from_hub(repo_id="worsty/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
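Once loaded, the agent acts greedily over its Q-table; a minimal sketch of that policy (with a tiny stand-in table, since the real one comes from `q-learning.pkl`):

```python
import numpy as np

def greedy_policy(qtable, state):
    # Choose the action with the highest Q-value for the given state.
    return int(np.argmax(qtable[state]))

# Tiny 2-state, 2-action stand-in for the downloaded Q-table.
qtable = np.array([[0.1, 0.9],
                   [0.8, 0.2]])
print(greedy_policy(qtable, 0))  # 1
```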
| [
-0.01785065233707428,
-0.01754460483789444,
-0.008260502479970455,
0.03134674206376076,
0.0509130097925663,
-0.015900854021310806,
-0.011338915675878525,
-0.008606135845184326,
-0.06008068844676018,
0.05284722149372101,
-0.00172959896735847,
-0.0069190822541713715,
0.02524528093636036,
0.0... |
AnonymousSub/AR_rule_based_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-04-18T15:25:13Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent Playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # the load_from_hub helper is defined in the Deep RL Course notebook

model = load_from_hub(repo_id="worsty/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
| [
-0.019179560244083405,
-0.015002704225480556,
-0.008756120689213276,
0.02742106281220913,
0.04682239890098572,
0.0012010675854980946,
-0.01954379491508007,
0.007546926848590374,
-0.038112543523311615,
0.0529957078397274,
0.018521541729569435,
-0.0047239032573997974,
0.012921138666570187,
0... |
AnonymousSub/AR_rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-large_vaxxstance_spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_vaxxstance_spanish
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5186
- F1: 0.8285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
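The linear `lr_scheduler_type` above decays the learning rate from its initial value to zero over training; a minimal sketch of that schedule (assuming no warmup, which is an assumption, not something stated in this card):

```python
def linear_lr(step, total_steps, base_lr=2e-5):
    # Linearly decay from base_lr at step 0 to 0 at total_steps.
    return base_lr * max(0.0, 1.0 - step / total_steps)

# 5 epochs x 126 steps per epoch = 630 total steps (from the results table).
print(linear_lr(0, 630))    # 2e-05
print(linear_lr(630, 630))  # 0.0
```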
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 126 | 0.7648 | 0.6686 |
| No log | 2.0 | 252 | 0.5188 | 0.8127 |
| No log | 3.0 | 378 | 0.5417 | 0.7882 |
| 0.6762 | 4.0 | 504 | 0.4829 | 0.8285 |
| 0.6762 | 5.0 | 630 | 0.5186 | 0.8285 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.028591133654117584,
0.0008144532912410796,
0.019835419952869415,
0.03806499019265175,
0.047610726207494736,
0.009585542604327202,
-0.017151426523923874,
-0.019737184047698975,
-0.03016413189470768,
0.05418412759900093,
0.00423770397901535,
-0.04610837250947952,
0.009740208275616169,
0.0... |
AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-cbp-lkg-qa-small
Google's Flan T5 model ([flan-t5-small](https://huggingface.co/google/flan-t5-small)) trained over a cleaned version of the Legal Knowledge Graph using triples formulated as QA pairs.
| [
-0.00618779519572854,
-0.014811482280492783,
0.008422203361988068,
0.047346293926239014,
0.018220946192741394,
0.009403370320796967,
-0.01686687581241131,
0.02560979500412941,
-0.009298088029026985,
0.04222123697400093,
0.010867336764931679,
0.002884229412302375,
0.022968336939811707,
0.01... |
AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.9191245777780953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2272
- Accuracy: 0.919
- F1: 0.9191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8167 | 1.0 | 250 | 0.3223 | 0.9025 | 0.8991 |
| 0.2503 | 2.0 | 500 | 0.2272 | 0.919 | 0.9191 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
-0.009398508816957474,
0.010314851999282837,
-0.027989163994789124,
0.03663384169340134,
0.0614657923579216,
0.03224878013134003,
-0.022465795278549194,
-0.03630248084664345,
-0.033630724996328354,
0.05719142407178879,
0.018588298931717873,
-0.04510689154267311,
0.034704457968473434,
0.044... |
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pubmed-summarization
type: pubmed-summarization
config: section
split: validation
args: section
metrics:
- name: Rouge1
type: rouge
value: 14.1074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the pubmed-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3381
- Rouge1: 14.1074
- Rouge2: 5.3407
- Rougel: 11.9593
- Rougelsum: 12.9286
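ROUGE-1 above is unigram-overlap F1 between candidate and reference summaries; a rough sketch of the idea (not the exact `rouge_score` implementation, which also applies stemming and other preprocessing):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    # Clipped unigram overlap, then F1 of precision and recall.
    ref, cand = reference.split(), candidate.split()
    overlap = sum((Counter(ref) & Counter(cand)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat sat"))  # 1.0
```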
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.0498 | 1.0 | 2500 | 2.4883 | 12.7167 | 5.1639 | 10.969 | 11.902 |
| 2.8737 | 2.0 | 5000 | 2.4022 | 13.812 | 5.1042 | 11.7056 | 12.6907 |
| 2.7603 | 3.0 | 7500 | 2.3895 | 13.6588 | 5.1146 | 11.6214 | 12.5331 |
| 2.6946 | 4.0 | 10000 | 2.3523 | 13.7167 | 5.2024 | 11.669 | 12.5419 |
| 2.6527 | 5.0 | 12500 | 2.3383 | 14.082 | 5.2787 | 11.9031 | 12.875 |
| 2.6303 | 6.0 | 15000 | 2.3381 | 14.1074 | 5.3407 | 11.9593 | 12.9286 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.018869398161768913,
-0.008383793756365776,
0.00843951664865017,
0.04340875893831253,
0.03915092349052429,
-0.0009026272455230355,
-0.026736527681350708,
-0.018580807372927666,
-0.04051491990685463,
0.054740387946367264,
0.03097081184387207,
-0.014982640743255615,
-0.004666089080274105,
... |
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Top-down-token-V1.0 Dreambooth model trained by Zapper with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| [
-0.03409319743514061,
-0.00744145642966032,
-0.027781222015619278,
0.031773004680871964,
0.028045637533068657,
0.00864984467625618,
-0.0001594903296791017,
0.006458070129156113,
-0.018341630697250366,
0.03876762092113495,
0.051762837916612625,
0.007012298796325922,
-0.02921648509800434,
0.... |
AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-04-18T15:41:43Z | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- Walker2d-v3
benchmark_name: OpenAI/Gym/MuJoCo
task_name: Walker2d-v3
pipeline_tag: reinforcement-learning
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/MuJoCo-Walker2d-v3
type: OpenAI/Gym/MuJoCo-Walker2d-v3
metrics:
- type: mean_reward
value: 5115.65 +/- 19.18
name: mean_reward
---
# Play **Walker2d-v3** with **SAC** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **SAC** implementation for the OpenAI/Gym/MuJoCo **Walker2d-v3** task, built with the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, based on reinforcement-learning framework implementations in PyTorch and JAX. It standardizes the reinforcement-learning workflow across different algorithms, benchmarks, and environments, and supports both academic research and prototype applications. Self-customized training pipelines and applications are also supported by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import SACAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py"))
# Instantiate the agent
agent = SACAgent(env="Walker2d", exp_name="Walker2d-v3-SAC", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import SACAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Walker2d-v3-SAC")
# Instantiate the agent
agent = SACAgent(env="Walker2d", exp_name="Walker2d-v3-SAC", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus.sac import SACAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = SACAgent(env="Walker2d", exp_name="Walker2d-v3-SAC")
# Train the agent
return_ = agent.train(step=int(5000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/MuJoCo",
task_name="Walker2d-v3",
algo_name="SAC",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html",
installation_guide='''
sudo apt update -y \
&& sudo apt install -y \
build-essential \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
libglfw3 \
libglfw3-dev \
libsdl2-dev \
libsdl2-image-dev \
libglm-dev \
libfreetype6-dev \
patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install DI-engine[common_env]
''',
usage_file_by_git_clone="./sac/walker2d_sac_deploy.py",
usage_file_by_huggingface_ding="./sac/walker2d_sac_download.py",
train_file="./sac/walker2d_sac.py",
repo_id="OpenDILabCommunity/Walker2d-v3-SAC"
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 6000,
'env_id': 'Walker2d-v3',
'norm_obs': {
'use_norm': False
},
'norm_reward': {
'use_norm': False
},
'collector_env_num': 1,
'evaluator_env_num': 8,
'n_evaluator_episode': 8
},
'policy': {
'model': {
'twin_critic': True,
'action_space': 'reparameterization',
'obs_shape': 17,
'action_shape': 6,
'actor_head_hidden_size': 256,
'critic_head_hidden_size': 256
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 1,
'batch_size': 256,
'learning_rate_q': 0.001,
'learning_rate_policy': 0.001,
'learning_rate_alpha': 0.0003,
'target_theta': 0.005,
'discount_factor': 0.99,
'alpha': 0.2,
'auto_alpha': False,
'log_space': True,
'target_entropy': None,
'ignore_done': False,
'init_w': 0.003,
'reparameterization': True
},
'collect': {
'collector': {},
'n_sample': 1,
'unroll_len': 1,
'collector_logit': False
},
'eval': {
'evaluator': {
'eval_freq': 1000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'cfg_type': 'InteractionSerialEvaluatorDict',
'n_episode': 8,
'stop_value': 6000
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 1000000
}
},
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'sac',
'priority': False,
'priority_IS_weight': False,
'random_collect_size': 10000,
'transition_with_policy_data': True,
'multi_agent': False,
'cfg_type': 'SACPolicyDict',
'command': {}
},
'exp_name': 'Walker2d-v3-SAC',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
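In the configuration above, `target_theta: 0.005` is the Polyak averaging coefficient SAC uses for its soft target-network updates. A minimal plain-Python sketch of the update rule (the scalar "weights" below are illustrative, not actual network parameters):

```python
def soft_update(target_params, online_params, theta=0.005):
    """Polyak-average online parameters into the target parameters."""
    return [(1 - theta) * t + theta * o for t, o in zip(target_params, online_params)]

# Illustrative scalars standing in for network weights.
target = [0.0, 1.0]
online = [1.0, 1.0]
target = soft_update(target, online, theta=0.005)
print(target)  # each target weight moves 0.5% of the way toward its online value
```

With `theta` this small, the target networks trail the online networks slowly, which stabilizes the Q-learning targets.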
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zhangpaipai/Walker2d-v3-SAC)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Walker2d-v3-SAC/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Walker2d-v3-SAC/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 851.05 KB
- **Last Update Date:** 2023-04-18
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/MuJoCo
- **Task:** Walker2d-v3
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.7
- **PyTorch version:** 1.7.1
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
| [
-0.04313744604587555,
0.00823171529918909,
0.0014423703541979194,
0.019697507843375206,
0.040509965270757675,
0.015736157074570656,
0.003297232324257493,
-0.007354902569204569,
-0.029350141063332558,
0.06495225429534912,
0.026196040213108063,
0.0031617239583283663,
0.0029521756805479527,
0... |
AnonymousSub/AR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartPoleV1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 70.40 +/- 32.64
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| [
-0.031326524913311005,
0.017405904829502106,
0.0023898379877209663,
0.009987362660467625,
0.04502401500940323,
-0.018528277054429054,
-0.02069713920354843,
-0.01862248219549656,
-0.029471158981323242,
0.08235114067792892,
0.017051365226507187,
-0.009941504336893559,
0.013459127396345139,
0... |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: rl-algo-impls
tags:
- MicrortsDefeatCoacAIShaped-v3
- ppo
- deep-reinforcement-learning
- reinforcement-learning
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 0.69 +/- 0.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MicrortsDefeatCoacAIShaped-v3
type: MicrortsDefeatCoacAIShaped-v3
---
# **PPO** Agent playing **MicrortsDefeatCoacAIShaped-v3**
This is a trained model of a **PPO** agent playing **MicrortsDefeatCoacAIShaped-v3** using the [sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/lf7j0hrv.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [4706d8d](https://github.com/sgoodfriend/rl-algo-impls/tree/4706d8dbb99b38e70d080c3de68d0751ea585a2f). The best and last models were kept from each training. This submission loads the best model from each training, reevaluates it, and selects the best of these latest evaluations (mean minus std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:------------------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | MicrortsDefeatCoacAIShaped-v3 | 1 | 0.461538 | 0.88712 | 26 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/arhv1foe) |
| ppo | MicrortsDefeatCoacAIShaped-v3 | 2 | 0.461538 | 0.84265 | 26 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/kd89zf31) |
| ppo | MicrortsDefeatCoacAIShaped-v3 | 3 | 0.692308 | 0.721602 | 26 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/1ak14nj4) |
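The "mean minus std" selection can be sketched in plain Python using the numbers from the table above (the helper function is illustrative, not part of the repo):

```python
# Rows from the table above: (seed, reward_mean, reward_std)
runs = [
    (1, 0.461538, 0.88712),
    (2, 0.461538, 0.84265),
    (3, 0.692308, 0.721602),
]

def select_best(runs):
    """Pick the run maximizing mean reward minus its standard deviation."""
    return max(runs, key=lambda r: r[1] - r[2])

best = select_best(runs)
print(best[0])  # 3 -- matching the starred row
```

Subtracting the standard deviation penalizes runs whose high mean comes with high variance across evaluation episodes.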
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assume you have a Weights & Biases project to upload runs to.
By default, training goes to an rl-algo-impls project, while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a WandB account and run `wandb login`.
## Usage
sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation may have diverged enough that similar results cannot be reproduced.
You might need to check out the commit the agent was trained on:
[4706d8d](https://github.com/sgoodfriend/rl-algo-impls/tree/4706d8dbb99b38e70d080c3de68d0751ea585a2f).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/1ak14nj4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance of reproducing these results, you'll want to check out the
commit the agent was trained on: [4706d8d](https://github.com/sgoodfriend/rl-algo-impls/tree/4706d8dbb99b38e70d080c3de68d0751ea585a2f). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env MicrortsDefeatCoacAIShaped-v3 --seed 3
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/lf7j0hrv were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone git@github.com:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh [-a {"ppo a2c dqn vpg"}] [-e ENVS] [-j {6}] [-p {rl-algo-impls-benchmarks}] [-s {"1 2 3"}]
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
These aren't exactly the hyperparameters in hyperparams/ppo.yml; they're the WandB run config, which is
very close and includes some additional data:
```
additional_keys_to_log:
- microrts_stats
algo: ppo
algo_hyperparams:
batch_size: 3072
clip_range: 0.1
clip_range_decay: none
clip_range_vf: 0.1
ent_coef: 0.01
learning_rate: 0.00025
learning_rate_decay: spike
max_grad_norm: 0.5
n_epochs: 4
n_steps: 512
ppo2_vf_coef_halving: true
vf_coef: 0.5
device: auto
env: Microrts-selfplay-unet
env_hyperparams:
env_type: microrts
make_kwargs:
map_paths:
- maps/16x16/basesWorkers16x16.xml
max_steps: 2000
num_selfplay_envs: 36
render_theme: 2
reward_weight:
- 10
- 1
- 1
- 0.2
- 1
- 4
n_envs: 24
self_play_kwargs:
num_old_policies: 12
save_steps: 200000
swap_steps: 10000
swap_window_size: 4
window: 25
env_id: MicrortsDefeatCoacAIShaped-v3
eval_hyperparams:
deterministic: false
env_overrides:
bots:
coacAI: 2
droplet: 2
guidedRojoA3N: 2
izanagi: 2
lightRushAI: 2
mixedBot: 2
naiveMCTSAI: 2
passiveAI: 2
randomAI: 2
randomBiasedAI: 2
rojo: 2
tiamat: 2
workerRushAI: 2
make_kwargs:
map_paths:
- maps/16x16/basesWorkers16x16.xml
max_steps: 4000
num_selfplay_envs: 0
render_theme: 2
reward_weight:
- 1
- 0
- 0
- 0
- 0
- 0
n_envs: 26
self_play_kwargs: {}
max_video_length: 4000
n_episodes: 26
score_function: mean
step_freq: 1000000
microrts_reward_decay_callback: false
n_timesteps: 300000000
policy_hyperparams:
activation_fn: relu
actor_head_style: unet
cnn_flatten_dim: 256
cnn_style: microrts
v_hidden_sizes:
- 256
- 128
seed: 3
use_deterministic_algorithms: true
wandb_entity: null
wandb_group: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_4706d8d
- host_192-9-146-21
- branch_selfplay
- v0.0.9
```
| [
-0.022029735147953033,
-0.004600361455231905,
-0.010453621856868267,
0.024475648999214172,
0.044148143380880356,
0.0024264350067824125,
-0.012173646129667759,
-0.02144618146121502,
-0.02688356302678585,
0.04534914717078209,
0.011375712230801582,
-0.019399745389819145,
-0.022549746558070183,
... |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/relbert-roberta-base-triplet-semeval2012
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8459126984126984
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5454545454545454
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5459940652818991
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.725958866036687
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.864
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5526315789473685
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5856481481481481
- task:
name: Analogy Questions (ConceptNet Analogy)
type: multiple-choice-qa
dataset:
name: ConceptNet Analogy
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.29446308724832215
- task:
name: Analogy Questions (TREX Analogy)
type: multiple-choice-qa
dataset:
name: TREX Analogy
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.453551912568306
- task:
name: Analogy Questions (NELL-ONE Analogy)
type: multiple-choice-qa
dataset:
name: NELL-ONE Analogy
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7066666666666667
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8872984782281151
- name: F1 (macro)
type: f1_macro
value: 0.8810190171103657
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8046948356807512
- name: F1 (macro)
type: f1_macro
value: 0.5209759868344143
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6771397616468039
- name: F1 (macro)
type: f1_macro
value: 0.6620361634733358
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.93107045976212
- name: F1 (macro)
type: f1_macro
value: 0.8387124439500746
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.902851770604826
- name: F1 (macro)
type: f1_macro
value: 0.90279695761752
---
# relbert/relbert-roberta-base-triplet-semeval2012
RelBERT based on [roberta-base](https://huggingface.co/roberta-base) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for fine-tuning details).
This model achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-triplet-semeval2012/raw/main/analogy.forward.json)):
- Accuracy on SAT (full): 0.5454545454545454
- Accuracy on SAT: 0.5459940652818991
- Accuracy on BATS: 0.725958866036687
- Accuracy on U2: 0.5526315789473685
- Accuracy on U4: 0.5856481481481481
- Accuracy on Google: 0.864
- Accuracy on ConceptNet Analogy: 0.29446308724832215
- Accuracy on T-Rex Analogy: 0.453551912568306
- Accuracy on NELL-ONE Analogy: 0.7066666666666667
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-triplet-semeval2012/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8872984782281151
- Micro F1 score on CogALexV: 0.8046948356807512
- Micro F1 score on EVALution: 0.6771397616468039
- Micro F1 score on K&H+N: 0.93107045976212
- Micro F1 score on ROOT09: 0.902851770604826
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-triplet-semeval2012/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8459126984126984
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-triplet-semeval2012")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
```
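Relation embeddings obtained this way are typically compared with cosine similarity. A plain-Python sketch (the vectors below are illustrative placeholders, not actual RelBERT output):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Placeholder vectors standing in for model.get_embedding(...) outputs.
tokyo_japan = [0.2, 0.9, 0.1]
paris_france = [0.25, 0.85, 0.05]
print(cosine_similarity(tokyo_japan, paris_france))  # close to 1 for relationally similar pairs
```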
### Training hyperparameters
- model: roberta-base
- max_length: 64
- epoch: 1
- batch: 79
- random_seed: 0
- lr: 2e-05
- lr_warmup: 10
- aggregation_mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- data_name: None
- exclude_relation: None
- split: train
- split_valid: validation
- loss_function: triplet
- classification_loss: False
- loss_function_config: {'mse_margin': 1}
- augment_negative_by_positive: False
See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-triplet-semeval2012/raw/main/finetuning_config.json).
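The `triplet` loss listed above pulls an anchor pair's embedding toward a relationally similar (positive) pair and pushes it away from a dissimilar (negative) pair. A minimal margin-based sketch in plain Python (margin and vectors are illustrative, not the exact training objective):

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on squared Euclidean distances."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)

anchor = [0.0, 1.0]
positive = [0.1, 0.9]   # relationally similar pair
negative = [1.0, 0.0]   # dissimilar pair
print(triplet_loss(anchor, positive, negative))  # 0.0: the negative is already beyond the margin
```

The loss is zero once the negative pair is farther from the anchor than the positive pair by at least the margin, so training focuses on violating triplets.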
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).
```
@inproceedings{ushio-etal-2021-distilling,
title = "Distilling Relation Embeddings from Pretrained Language Models",
author = "Ushio, Asahi and
Camacho-Collados, Jose and
Schockaert, Steven",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.712",
doi = "10.18653/v1/2021.emnlp-main.712",
pages = "9044--9062",
abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert",
}
```
| [
0.0041767917573452,
-0.012378653511404991,
-0.0276162289083004,
0.05425935983657837,
0.04272815212607384,
0.03271901607513428,
-0.033201199024915695,
-0.008178330957889557,
-0.0673501193523407,
0.02942962385714054,
0.016202401369810104,
0.0016416915459558368,
0.017116138711571693,
0.027175... |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236843302640881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
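The `linear` scheduler decays the learning rate from its peak to zero over the full run; with 250 steps per epoch (see the results table below) and 2 epochs, that is 500 steps. A plain-Python sketch of the schedule:

```python
def linear_lr(step, total_steps=500, peak_lr=2e-05):
    """Linearly decay the learning rate from peak_lr to 0 over total_steps."""
    return peak_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))    # 2e-05 at the start
print(linear_lr(250))  # 1e-05 halfway (end of epoch 1)
print(linear_lr(500))  # 0.0 at the end
```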
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8329 | 1.0 | 250 | 0.3142 | 0.9085 | 0.9057 |
| 0.2503 | 2.0 | 500 | 0.2170 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
-0.01026198547333479,
0.010183059610426426,
-0.027982588857412338,
0.03618389368057251,
0.06084326654672623,
0.031966157257556915,
-0.022935474291443825,
-0.0362207405269146,
-0.033023495227098465,
0.057130780071020126,
0.017688408493995667,
-0.04544918239116669,
0.03445769473910332,
0.044... |
AnonymousSub/EManuals_RoBERTa_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- WilliamWen/autotrain-data-ni_io_03
co2_eq_emissions:
emissions: 0.002014133815227475
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 50535120664
- CO2 Emissions (in grams): 0.0020
## Validation Metrics
- Loss: 0.027
- Accuracy: 0.991
- Precision: 0.908
- Recall: 0.908
- F1: 0.908
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/WilliamWen/autotrain-ni_io_03-50535120664
```
Or the Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("WilliamWen/autotrain-ni_io_03-50535120664", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("WilliamWen/autotrain-ni_io_03-50535120664", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | [
-0.029374998062849045,
-0.024500137194991112,
-0.004718688316643238,
0.025184674188494682,
0.0409025140106678,
0.03606254607439041,
-0.03589503467082977,
-0.010678716003894806,
-0.044801224023103714,
0.07735659182071686,
0.025158200412988663,
0.02244551293551922,
-0.004699631594121456,
0.0... |
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-04-18T18:16:22Z | ---
license: apache-2.0
---
GGML conversion of the mix of the original models by KoboldAI, for use with KoboldCpp.
Mixed model by digitous: https://huggingface.co/digitous/Adventien-GPTJ
Original models:
Adventure: https://huggingface.co/KoboldAI/GPT-J-6B-Adventure
Skein: https://huggingface.co/KoboldAI/GPT-J-6B-Skein | [
-0.0764918103814125,
-0.012829911895096302,
-0.025561736896634102,
0.03489391878247261,
0.07959108054637909,
0.011241868138313293,
0.005779619328677654,
0.0029082384426146746,
-0.020478324964642525,
0.0629875510931015,
0.040896687656641006,
0.007770884316414595,
-0.008902215398848057,
0.02... |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | Access to model AIMH/mental-xlnet-base-cased is restricted and you are not in the authorized list. Visit https://huggingface.co/AIMH/mental-xlnet-base-cased to ask for access. | [
-0.06483475863933563,
0.003526885062456131,
-0.014214673079550266,
0.013813807629048824,
0.016330979764461517,
0.008835681714117527,
0.005292294081300497,
-0.01113167218863964,
-0.033447265625,
0.02987912856042385,
0.04164959490299225,
-0.007512688171118498,
0.005590858403593302,
0.0329217... |
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- suatatan/autotrain-data-red-arrow-finder
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.32918055904446797
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 50583120813
- CO2 Emissions (in grams): 0.3292
## Validation Metrics
- Loss: 0.602
- Accuracy: 0.750
- Precision: 0.812
- Recall: 0.812
- AUC: 0.793
- F1: 0.812 | [
-0.011481423862278461,
-0.015802884474396706,
0.019948242232203484,
0.04779529571533203,
0.04771794378757477,
-0.006019174586981535,
-0.01727604679763317,
0.0008876320207491517,
-0.03605613857507706,
0.06209442391991615,
-0.003042643191292882,
0.0025812266394495964,
0.0021254236344248056,
... |
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- doc_lay_net
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Layoutlmv3-finetuned-DocLayNet-test
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: doc_lay_net
type: doc_lay_net
config: DocLayNet_2022.08_processed_on_2023.01
split: test
args: DocLayNet_2022.08_processed_on_2023.01
metrics:
- name: Precision
type: precision
value: 0.6563380281690141
- name: Recall
type: recall
value: 0.6743849493487699
- name: F1
type: f1
value: 0.6652391149179158
- name: Accuracy
type: accuracy
value: 0.9017296604740551
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Layoutlmv3-finetuned-DocLayNet-test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the doc_lay_net dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4297
- Precision: 0.6563
- Recall: 0.6744
- F1: 0.6652
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
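The `linear` scheduler with a 0.1 warmup ratio ramps the learning rate from 0 up to 1e-05 over the first 100 of the 1000 training steps, then decays it linearly back to 0. A minimal sketch of that schedule (the function name is illustrative, not part of the Trainer API):

```python
def linear_warmup_decay(step, total_steps=1000, warmup_ratio=0.1, base_lr=1e-5):
    """Learning rate at a given optimizer step for a linear schedule with warmup."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```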
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.5045 | 0.37 | 250 | 0.7837 | 0.2918 | 0.4304 | 0.3478 | 0.7970 |
| 0.8283 | 0.73 | 500 | 0.5559 | 0.5013 | 0.5918 | 0.5428 | 0.8722 |
| 0.517 | 1.1 | 750 | 0.7558 | 0.5196 | 0.5886 | 0.5519 | 0.8059 |
| 0.4416 | 1.46 | 1000 | 0.5127 | 0.4434 | 0.6203 | 0.5172 | 0.8751 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| [
0.0029113052878528833,
-0.007095618173480034,
-0.009106168523430824,
0.022537099197506905,
0.04054447263479233,
0.017658451572060585,
-0.027769798412919044,
-0.017344648018479347,
-0.02532360516488552,
0.051968175917863846,
0.026901880279183388,
-0.029478101059794426,
0.01066550798714161,
... |
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-04-18T19:21:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gor1/my_awesome_model_tweets_A2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gor1/my_awesome_model_tweets_A2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1092
- Validation Loss: 0.0892
- Train Accuracy: 1.0
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 80, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
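The `PolynomialDecay` entry above (power 1.0, cycle False) is an ordinary linear decay from 2e-05 to 0 over 80 steps. A minimal sketch of the formula (a sketch of the configured schedule, not the Keras implementation):

```python
def polynomial_decay(step, initial_lr=2e-5, end_lr=0.0, decay_steps=80, power=1.0):
    """PolynomialDecay formula; with power=1.0 this is plain linear decay."""
    step = min(step, decay_steps)  # cycle=False: hold end_lr after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```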
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6905 | 0.6443 | 0.6692 | 0 |
| 0.6335 | 0.6095 | 0.6692 | 1 |
| 0.6049 | 0.5792 | 0.6692 | 2 |
| 0.5833 | 0.5375 | 0.6692 | 3 |
| 0.5253 | 0.4871 | 0.7846 | 4 |
| 0.4834 | 0.4300 | 0.8308 | 5 |
| 0.4283 | 0.3692 | 0.8769 | 6 |
| 0.3737 | 0.3106 | 0.9154 | 7 |
| 0.3157 | 0.2593 | 0.9615 | 8 |
| 0.2583 | 0.2084 | 0.9692 | 9 |
| 0.2192 | 0.1693 | 0.9769 | 10 |
| 0.1775 | 0.1398 | 1.0 | 11 |
| 0.1464 | 0.1172 | 1.0 | 12 |
| 0.1310 | 0.1009 | 1.0 | 13 |
| 0.1092 | 0.0892 | 1.0 | 14 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.03250068798661232,
-0.028502875939011574,
-0.013909169472754002,
0.03093687631189823,
0.04221782460808754,
0.022084716707468033,
-0.00998349767178297,
-0.02169206365942955,
-0.03311267867684364,
0.0493755079805851,
0.034622181206941605,
-0.013005915097892284,
-0.0021506119519472122,
0.0... |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Telugu_movie_review_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Telugu_movie_review_sentiment
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3627
- Accuracy: 0.8814
- F1: 0.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.03042237088084221,
0.0023007814306765795,
-0.014784589409828186,
0.04173119738698006,
0.04631495848298073,
0.045791421085596085,
-0.028803745284676552,
-0.008451788686215878,
-0.041410114616155624,
0.06501299887895584,
0.0473644956946373,
-0.04451265186071396,
0.014211802743375301,
0.04... |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 975.54 +/- 73.77
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the `repo_id` and `filename` below are placeholders, not values taken from this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholders: substitute the actual repository id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
| [
-0.04556029662489891,
-0.0010226145386695862,
-0.022041192278265953,
0.032290980219841,
0.04382120817899704,
0.017908448353409767,
-0.018288632854819298,
-0.030179515480995178,
-0.036837752908468246,
0.06933046132326126,
0.02190237119793892,
0.002690535271540284,
0.01575271040201187,
0.028... |
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: stable-baselines3
tags:
- RoombaAToB-from-behavior-cloning-long
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: BC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-from-behavior-cloning-long
type: RoombaAToB-from-behavior-cloning-long
metrics:
- type: mean_reward
value: -31.26 +/- 0.00
name: mean_reward
verified: false
---
# **BC** Agent playing **RoombaAToB-from-behavior-cloning-long**
This is a trained model of a **BC** agent playing **RoombaAToB-from-behavior-cloning-long**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal checkpoint-fetching sketch (`repo_id` and `filename` are placeholders; note that BC, behavior cloning, is not a core stable-baselines3 algorithm, so the class used to load the policy depends on how it was saved):
```python
from huggingface_sb3 import load_from_hub

# Placeholders: substitute the actual repository id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
# Load `checkpoint` with whatever class the policy was trained and saved with.
```
| [
-0.02524208091199398,
-0.004532364662736654,
-0.021683309227228165,
0.043032802641391754,
0.0438060462474823,
0.015849001705646515,
-0.023626670241355896,
-0.011508943513035774,
-0.04569012299180031,
0.06196420267224312,
0.019121481105685234,
-0.009989277459681034,
0.00713589321821928,
0.0... |
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GPTCodeDetection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPTCodeDetection
THIS MODEL IS NOT FINISHED.
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
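Here `total_train_batch_size` is derived rather than set directly: gradients from 12 consecutive micro-batches of 8 are accumulated before each optimizer step. A minimal sketch of the relationship:

```python
per_device_batch_size = 8        # train_batch_size above
gradient_accumulation_steps = 12
# Gradients are summed over 12 micro-batches before each optimizer step,
# so the effective (total) train batch size is their product.
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 96
```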
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 29 | 0.2387 | 0.8909 |
| No log | 1.98 | 59 | 0.0015 | 1.0 |
| No log | 2.92 | 87 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.029136445373296738,
-0.011386564001441002,
-0.010171265341341496,
0.03346223011612892,
0.029292430728673935,
0.023422634229063988,
-0.0005827406421303749,
0.003949059173464775,
-0.030769992619752884,
0.048441022634506226,
0.022994335740804672,
-0.02356739155948162,
-0.006128034554421902,
... |
AnonymousSub/specter-bert-model_copy_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | 2023-04-18T20:43:33Z | ---
license: mit
language:
- de
inference: false
---
# snip-igel-500-v2-adapter-merged
<!-- Provide a quick summary of what the model is/does. -->
snip-igel-500-v2-adapter-merged
Version 1.0 / 18 April 2023
Model and Adapters merged for snip-igel-500-v2.
See [snip-igel-500-v2](https://huggingface.co/snipaid/snip-igel-500-v2) for the full model description. | [
-0.029483625665307045,
-0.018088586628437042,
-0.011798235587775707,
0.0012586681405082345,
0.055378176271915436,
0.009411128237843513,
-0.037393972277641296,
0.0026061951648443937,
-0.05289788544178009,
0.06041843071579933,
0.047383200377225876,
-0.0020789646077901125,
0.04215243458747864,
... |
AnonymousSub/specter-bert-model_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2023-04-18T20:46:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.8.0
- Tokenizers 0.11.0
| [
-0.019635489210486412,
-0.013578717596828938,
-0.021889572963118553,
0.04252355545759201,
0.04869015887379646,
0.028975872322916985,
-0.03766356036067009,
0.012220287695527077,
-0.02372315712273121,
0.03705385699868202,
0.03590410575270653,
-0.004693009424954653,
0.027751516550779343,
0.03... |
AnonymousSub/unsup-consert-base | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.80 +/- 22.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
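The mean_reward metric above (43.80 +/- 22.28) is the mean and standard deviation of the per-episode returns collected during evaluation. A minimal sketch with hypothetical returns (the actual evaluation episodes are not part of this card):

```python
import statistics

# Hypothetical per-episode returns from an evaluation run.
episode_returns = [10.0, 25.0, 40.0, 55.0, 70.0]

mean_reward = statistics.mean(episode_returns)
# Population std here; some evaluators use the sample std instead.
std_reward = statistics.pstdev(episode_returns)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```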
| [
-0.03984691947698593,
0.015720678493380547,
0.01442210003733635,
0.01738467626273632,
0.04861755296587944,
-0.013848274946212769,
-0.01973387971520424,
-0.023705190047621727,
-0.017929682508111,
0.06755047291517258,
0.035217273980379105,
-0.007871548645198345,
0.011270062997937202,
-0.0069... |
AnonymousSub/unsup-consert-base_copy | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-04-18T20:52:22Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_CleanDesc_Collision_v2.1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_CleanDesc_Collision_v2.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4061
- Validation Loss: 1.0763
- Train Accuracy: 0.6923
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3050, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.4061 | 1.0763 | 0.6923 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.020321277901530266,
-0.009862074628472328,
-0.014123530127108097,
0.02642613649368286,
0.02169831469655037,
0.01075571496039629,
-0.006805194541811943,
-0.019781840965151787,
-0.05155344307422638,
0.05732734873890877,
0.029028942808508873,
-0.020318135619163513,
0.030037079006433487,
0.... |
Anubhav23/IndianlegalBert | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-18T21:26:04Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Yanrds/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| [
-0.05355910211801529,
0.001851151930168271,
-0.004639741964638233,
0.05201002210378647,
0.02505696378648281,
0.030821364372968674,
-0.01091103907674551,
-0.023868419229984283,
-0.0006631740834563971,
0.05027369409799576,
0.025914523750543594,
-0.015187988989055157,
0.007430329453200102,
0.... |
Anupam/QuestionClassifier | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.007389966398477554,
0.002044172491878271,
-0.025711413472890854,
0.03988448530435562,
0.04682781547307968,
0.01435170229524374,
-0.03221778944134712,
-0.024170799180865288,
-0.028361748903989792,
0.054472070187330246,
0.005332177970558405,
-0.013839791528880596,
0.019043058156967163,
0.... |
gaurishhs/API | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-18T21:34:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/cleaned-e5-base-unsupervised-test
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/cleaned-e5-base-unsupervised-test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/cleaned-e5-base-unsupervised-test')
model = AutoModel.from_pretrained('rithwik-db/cleaned-e5-base-unsupervised-test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/cleaned-e5-base-unsupervised-test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 298 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
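MultipleNegativesRankingLoss scores each anchor against every positive in the batch with scaled cosine similarity and applies cross-entropy with the matching positive as the correct class; the other in-batch positives act as negatives. A minimal pure-Python sketch of that objective (not the library's implementation):

```python
import math

def cos_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mnr_loss(anchors, positives, scale=20.0):
    """Mean cross-entropy over the batch; positives[i] is the target for anchors[i]."""
    n = len(anchors)
    total = 0.0
    for i in range(n):
        logits = [scale * cos_sim(anchors[i], positives[j]) for j in range(n)]
        log_z = math.log(sum(math.exp(x) for x in logits))
        total += log_z - logits[i]  # -log softmax probability of the true pair
    return total / n
```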
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | [
-0.03398760035634041,
-0.022591425105929375,
-0.0151683883741498,
0.046900443732738495,
0.009411766193807125,
0.044224075973033905,
-0.022125983610749245,
-0.004835634957998991,
-0.07226567715406418,
0.0833253562450409,
0.03764093294739723,
0.015461000613868237,
-0.0015769570600241423,
0.0... |
Apisate/Discord-Ai-Bot | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | Access to model sobanerjee/ddpm-butterflies-128 is restricted and you are not in the authorized list. Visit https://huggingface.co/sobanerjee/ddpm-butterflies-128 to ask for access. | [
-0.04405621811747551,
0.008520974777638912,
0.00023276740103028715,
-0.0015622134087607265,
0.04138193652033806,
0.00698855658993125,
0.012297802604734898,
0.001120631000958383,
-0.033345527946949005,
0.045769188553094864,
0.05448164790868759,
-0.031193846836686134,
0.022181719541549683,
0... |
Apoorva/k2t-test | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"keytotext",
"k2t",
"Keywords to Sentences",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 7 | null | Access to model Kalslice/autotrain-fakefind-50620120864 is restricted and you are not in the authorized list. Visit https://huggingface.co/Kalslice/autotrain-fakefind-50620120864 to ask for access. | [
-0.049072109162807465,
-0.0036365720443427563,
-0.0017463022377341986,
0.018607769161462784,
0.04237030819058418,
0.03479403257369995,
-0.004314641933888197,
0.00457015773281455,
-0.027217859402298927,
0.050752464681863785,
0.037024836987257004,
0.0016955157043412328,
-0.0020424951799213886,... |
ArBert/albert-base-v2-finetuned-ner-agglo | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: cc-by-nc-4.0
language:
- zh
tags:
- art
- legal
---
# Watch The Pope's Exorcist (教廷第一驅魔人) Online
Where can you watch The Pope's Exorcist online for free? Watch The Pope's Exorcist online in high-definition, complete-version streaming, and keep up with the latest movie news anytime, anywhere!
Watch The Pope's Exorcist online, complete 2023 version, free and in genuine HD quality.
## Watch The Pope's Exorcist online, free movie download:
➤[https://super4kuhdq.com/zh/movie/758323](https://super4kuhdq.com/zh/movie/758323)
●● Available for download, (The Pope's Exorcist 2023) 720p, 1080p, BrRip, DvdRip, YouTube, Reddit, multilingual and high quality ●●
Just click to watch the complete version of The Pope's Exorcist online in free high definition. Traditional Chinese subtitles, offline viewing, and continued playback across devices (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD) are supported.
You can enjoy [The Pope's Exorcist 2023] in the highest quality for free. Watch the complete version of the movie online.
## The Pope's Exorcist Taiwan release, showtimes, story, plot introduction, and how to watch — all available here.
Father Gabriele Amorth (Russell Crowe), the Vatican's chief exorcist appointed directly by the Pope, has practiced exorcism for 36 years and presided over more than 100,000 exorcism rites, shouldering the duty of fighting evil with the courage his faith gives him. This time, while exorcising a boy possessed by an evil spirit, he unexpectedly discovers a dark history hidden by the Catholic Church. He must uncover the truth to save the boy's family and repel the demon threatening the Holy See!
Release date: 2023-04-05
Running time: 104 minutes
Genre: Horror, Mystery
## How can you watch The Pope's Exorcist online for free, without ads?
Here you can watch The Pope's Exorcist free online in full HD 1080p, with no ads and no registration. If you use an Apple device and your Android TV supports AirPlay, you can mirror the Apple device's screen to the TV or stream content to it.
## You can also download The Pope's Exorcist for free here!
Looking for movies to watch? Below are a few decent movie-resource sites, each with its own focus — some specialize in movies, some in TV series, and some in American dramas. Hopefully these suggestions help. Xiaodiao Wang (小調網), formerly Movie Heaven (電影天堂), is currently one of the larger Chinese platforms for watching and downloading movies online, offering Thunder and Flashget downloads as well as mobile video formats.
We offer the chance to watch the latest movies in full HD quality. Watch the free 1080p HD movie The Pope's Exorcist online, along with subtitled and original-language versions of the most prominent festival films.
### Google keywords:
The Pope's Exorcist (教廷第一驅魔人)
The Pope's Exorcist watch online
The Pope's Exorcist watch online (小鴨)
The Pope's Exorcist free online
The Pope's Exorcist watch online
The Pope's Exorcist 2023 movie
The Pope's Exorcist watch online complete version
The Pope's Exorcist Taiwan release
The Pope's Exorcist Taiwan release date
-0.016693655401468277,
-0.018727323040366173,
-0.008985253050923347,
0.026214538142085075,
0.05499435216188431,
0.005126792471855879,
-0.019140422344207764,
0.014914232306182384,
-0.02584424428641796,
0.0495639331638813,
0.02561960741877556,
0.001371506368741393,
0.02793014980852604,
0.046... |
ArBert/albert-base-v2-finetuned-ner-gmm | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cybersecurity_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cybersecurity_ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1383
- Precision: 0.6115
- Recall: 0.6154
- F1: 0.6134
- Accuracy: 0.9657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 176 | 0.1601 | 0.4801 | 0.4776 | 0.4788 | 0.9553 |
| No log | 2.0 | 352 | 0.1371 | 0.5934 | 0.5737 | 0.5834 | 0.9612 |
| 0.1455 | 3.0 | 528 | 0.1320 | 0.5702 | 0.6207 | 0.5944 | 0.9620 |
| 0.1455 | 4.0 | 704 | 0.1343 | 0.6015 | 0.6175 | 0.6094 | 0.9646 |
| 0.1455 | 5.0 | 880 | 0.1383 | 0.6115 | 0.6154 | 0.6134 | 0.9657 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.033950917422771454,
-0.00800742581486702,
-0.003698530374094844,
0.018239494413137436,
0.033668603748083115,
0.012501953169703484,
-0.01041440386325121,
-0.007837248034775257,
-0.04825583100318909,
0.06730876863002777,
0.041022833436727524,
-0.021280447021126747,
0.008887792937457561,
0... |
ArBert/bert-base-uncased-finetuned-ner | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-large-da-multiwoz2.0_400-ep20-nonstop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz2.0_400-ep20-nonstop
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3661
- Accuracy: 41.2421
- Num: 7358
- Gen Len: 15.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 1.1824 | 1.16 | 200 | 0.5187 | 28.4524 | 7358 | 14.7642 |
| 0.5471 | 2.33 | 400 | 0.4278 | 32.5629 | 7358 | 15.4386 |
| 0.4647 | 3.49 | 600 | 0.4029 | 35.2443 | 7358 | 16.135 |
| 0.4313 | 4.65 | 800 | 0.3820 | 36.6479 | 7358 | 16.1552 |
| 0.4074 | 5.81 | 1000 | 0.3775 | 37.6957 | 7358 | 15.1439 |
| 0.3859 | 6.98 | 1200 | 0.3690 | 38.3142 | 7358 | 15.2045 |
| 0.369 | 8.14 | 1400 | 0.3720 | 39.8799 | 7358 | 15.7923 |
| 0.3547 | 9.3 | 1600 | 0.3665 | 39.5217 | 7358 | 15.3394 |
| 0.3457 | 10.47 | 1800 | 0.3632 | 39.8289 | 7358 | 15.4761 |
| 0.3423 | 11.63 | 2000 | 0.3678 | 39.9509 | 7358 | 15.6708 |
| 0.3295 | 12.79 | 2200 | 0.3657 | 41.1373 | 7358 | 15.1586 |
| 0.3212 | 13.95 | 2400 | 0.3651 | 40.8611 | 7358 | 15.7312 |
| 0.3128 | 15.12 | 2600 | 0.3664 | 40.8806 | 7358 | 15.4553 |
| 0.3131 | 16.28 | 2800 | 0.3677 | 40.8906 | 7358 | 15.4629 |
| 0.3093 | 17.44 | 3000 | 0.3661 | 40.9971 | 7358 | 15.4329 |
| 0.3021 | 18.6 | 3200 | 0.3652 | 41.2953 | 7358 | 15.5118 |
| 0.3004 | 19.77 | 3400 | 0.3661 | 41.2492 | 7358 | 15.5246 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
| [
-0.04227500781416893,
-0.014445922337472439,
-0.0008321181521750987,
0.04423888027667999,
0.034729715436697006,
-0.01497739739716053,
-0.015707137063145638,
-0.02326519414782524,
-0.015243313275277615,
0.035660505294799805,
0.026102012023329735,
-0.01767886057496071,
0.0004940041108056903,
... |
Aracatto/Catto | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | <h1 align="center" >Remove Objects Server</h1>
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
    <li><a href="#about">About</a></li>
    <li><a href="#installation">Installation</a></li>
    <li><a href="#usage">Usage</a></li>
    <li><a href="#license">License</a></li>
</ol>
</details>
## About
This is a Python project for removing unwanted objects from images via inpainting. It includes a FastAPI server that exposes endpoints for processing images with inpainting, and it uses PyTorch for training and testing the inpainting model.
<p align="center">
<img src="lama_cleaner_video.gif" />
</p>
## Installation
To install this project, you should first create a virtual environment using the following commands:
```bash
python3 -m venv venv
source venv/bin/activate
```
After creating the virtual environment, you can install the required libraries using pip:
```bash
pip install -r requirements.txt
```
## Usage
To use this project, first start the server by running main.py:
```bash
python main.py
```
After the server has started, you can test the following endpoints:
- `http://{localhost}:{port}/lama/paint`
- This endpoint accepts an image file in the `file` parameter and applies inpainting techniques to remove unwanted objects.
- `http://{localhost}:{port}/mask`
  - The mask endpoint accepts `img` and `mask` as input parameters and applies the mask to the image.
  - You can use the `testX.png` images and matching `testX_mask.png` masks in the `image` folder for testing.
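As a quick smoke test of the endpoints above, a minimal standard-library client can be sketched as follows. The port (uvicorn's default 8000) and the multipart field names (`file`, `img`, `mask`) are assumptions read off this README, not verified against `main.py`:

```python
# Hypothetical client sketch for the Remove Objects Server endpoints.
# Host, port, and field names are assumptions from the README above.
import mimetypes
import urllib.request
import uuid

def endpoint_url(host: str, port: int, route: str) -> str:
    """Build the full URL for one of the server's routes."""
    return f"http://{host}:{port}{route}"

def post_files(url: str, files: dict) -> bytes:
    """POST local files as multipart/form-data and return the raw response body."""
    boundary = uuid.uuid4().hex
    body = b""
    for field, path in files.items():
        ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
        with open(path, "rb") as f:
            data = f.read()
        body += (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{field}"; filename="{path}"\r\n'
            f"Content-Type: {ctype}\r\n\r\n"
        ).encode() + data + b"\r\n"
    body += f"--{boundary}--\r\n".encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", f"multipart/form-data; boundary={boundary}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example (uncomment with the server running):
# result = post_files(endpoint_url("localhost", 8000, "/lama/paint"),
#                     {"file": "image/test1.png"})
```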
## License
This project is licensed under the MIT License - see the LICENSE file for details.
To build the Docker image:
```bash
docker build -t zest .
```
| [
0.01217200979590416,
-0.03298013657331467,
-0.020018352195620537,
0.02430638298392296,
0.04047507420182228,
0.009395988658070564,
0.0016552177257835865,
-0.007576760370284319,
-0.006656082347035408,
0.06066890433430672,
0.019614920020103455,
0.004060222301632166,
0.03573622927069664,
0.058... |
Araf/Ummah | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-4.0
language:
- zh
tags:
- art
- legal
---
# Watch The Son (兒子可否不要走) Online
Where can you watch The Son online for free? Watch The Son online in high-definition, complete-version streaming, and keep up with the latest movie news anytime, anywhere!
Watch The Son online, complete 2023 version, free and in genuine HD quality.
## Watch The Son online, free movie download:
[](https://super4kuhdq.com/zh/movie/806368)
➤[https://super4kuhdq.com/zh/movie/806368](https://super4kuhdq.com/zh/movie/806368)
●● Available for download, (The Son 2023) 720p, 1080p, BrRip, DvdRip, YouTube, Reddit, multilingual and high quality ●●
Just click to watch the complete version of The Son online in free high definition. Traditional Chinese subtitles, offline viewing, and continued playback across devices (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD) are supported.
You can enjoy [The Son 2023] in the highest quality for free. Watch the complete version of the movie online.
## The Son Taiwan release, showtimes, story, plot introduction, and how to watch — all available here.
Peter (Jackman) leads a busy life with his new partner and their newborn child, until everything is thrown into chaos when his ex-wife Kate (Dern) shows up with their troubled, angry teenage son Nicholas. Peter strives to be a better father, trying to help his son with intimate moments of family happiness, but Nicholas's condition sets the family on a dangerous path, and they must do everything they can to preserve the bond that holds them together.
Release date: 2022-11-10
Running time: 123 minutes
Genre: Drama
## How can you watch The Son online for free, without ads?
Here you can watch The Son free online in full HD 1080p, with no ads and no registration. If you use an Apple device and your Android TV supports AirPlay, you can mirror the Apple device's screen to the TV or stream content to it.
## You can also download The Son for free here!
Looking for movies to watch? Below are a few decent movie-resource sites, each with its own focus — some specialize in movies, some in TV series, and some in American dramas. Hopefully these suggestions help. Xiaodiao Wang (小調網), formerly Movie Heaven (電影天堂), is currently one of the larger Chinese platforms for watching and downloading movies online, offering Thunder and Flashget downloads as well as mobile video formats.
We offer the chance to watch the latest movies in full HD quality. Watch the free 1080p HD movie The Son online, along with subtitled and original-language versions of the most prominent festival films.
### Google keywords:
The Son (兒子可否不要走)
The Son watch online
The Son watch online (小鴨)
The Son free online
The Son watch online
The Son 2023 movie
The Son watch online complete version
The Son Taiwan release
The Son Taiwan release date
-0.01981986127793789,
-0.02410382404923439,
-0.01311712060123682,
0.03487200662493706,
0.04239307716488838,
0.008368772454559803,
-0.023600852116942406,
0.0002241999318357557,
-0.031586602330207825,
0.04555371776223183,
0.027905583381652832,
-0.008421170525252819,
0.0404987558722496,
0.034... |
AragornII/DialoGPT-small-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- xnli_bn
model-index:
- name: Further_fine_tuning_E9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Further_fine_tuning_E9
This model is a fine-tuned version of [rafsankabir/Pretrained_E10](https://huggingface.co/rafsankabir/Pretrained_E10) on the xnli_bn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
-0.028607822954654694,
0.000071443006163463,
0.009879527613520622,
0.032120462507009506,
0.020184744149446487,
0.02104962058365345,
-0.023183589801192284,
-0.01546401996165514,
-0.03115321882069111,
0.03781411796808243,
0.0453830324113369,
-0.016036899760365486,
0.023465394973754883,
0.023... |
ArashEsk95/bert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: other
---
**♫ Discord: https://discord.gg/aihub | Join the community, learn to make models, chat with like-minded people, and let's create music ♩ ♪**
**♫ Discord Latino: https://discord.gg/Crfqs7uB5V | Join our community, learn to create AI models, talk with others about music, and enjoy the musical notes ♩ ♪**
**IMPORTANT!!!!!!!!!: VOICES CANNOT BE COPYRIGHTED. We do not promote piracy so please do not come in with that. We do promote legal-length sample clips of vocals. We promote music & AI produced music covers (impressions). We promote datasets. We promote machine learning & Voice AI Models.**
**If you want your credits/name removed, please open a ticket on the page and I will remove it diligently.**
**Tools: https://vocalremover.org/ https://x-minus.pro/ai https://create.musicfy.lol/**
**Created Using: SoftVC VITS Singing Voice Conversion (so vits svc 4.0) | Retrieval based Voice Conversion (RVC)**
**Name - Amount of Steps - Creator**
21 Savage - 100k - brandy#4247 |
21 Savage - 50k - candy#6483
2Pac Tupac - 50k - Makaveli AI#4517 |
2Pac Tupac (RVC) - 150 Epoch - Makaveli AI#4517 |
2Pac Tupac - 33k - ????
6lack (RVC) - 700 Epoch - RomeTheDaddy#4293
Aaliyah - 33.6k - COMEHU#2094
Aitana - 75K - blaise#9999
Alizee - 45.6k - CrimsonZockt#2221 |
Alizee (2000-2003) - 23.2k - CrimsonZockt#2221
Amano Pikamee (VOMS Project) - 30k - dacoolkid44#4173
Ameer Vann - 15k - asher roth#3637
Amelia Watson (Hololive EN) - 30k - dacoolkid44#4173
Andrew Tate - 50k - Makaveli AI#4517
Ant Clemons (RVC - 3150 Steps - SamV1sion#5354
Anthony Green (Circa Survive) (RVC) - 500 Epochs - owl#1313 |
Anthony Green (RVC) (Alpha) - 250 Epoch - philo#9160
Anuel AA - 41.6k - Smile WRLD#9877 |
Anuel AA (2016 Era) - 500 Steps - Raaul10#2946
Ariana Grande - 73k - ????? - [Trained using pro tools sessions so the vocals sound dry] |
Ariana Grande - 89k - christy#0059 |
Ariana Grande (RVC) - 4k Epoch 28k Steps - MentosAndRice#8492
Aries of Wunderworld - 150k - lij#0001
ASAP Rocky (RVC) - 1k Epoch - Ski#5447
Ayesha Erotica - 100k - henry_#7065
Baby Keem - 191k - okcool#5237
Bad Bunny - 180k - Bowl#2016 |
Bad Bunny - 1k Epoch - CJPP270#0162
BANANIROU - 100k - ştar#7068
Bart Simpson - 22k - AnthonyFandom70100#9529 |
Bart Simpson (RVC) - 250 Epoch - AnthonyFandom70100#9529
BENEE - 8k - rejekts#0820
Biden - 20k - Nardicality
Biggie Smalls - 112.8k - justinjohn-03#4897 |
Biggie Smalls (RVC) - 20k - Makaveli AI#4517
Billie Eilish - 8k - Vali665#9670 [7 Hours of Training] |
Billie Eilish 2016-2018 - 1k - Vali665#9670 |
Billie Eilish (RVC) - ???? - senzo#1502
Billie Joe - 24k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Binyamin Netanyahu (Israel's PM) - 67.7K - yeatfan119#8009
Bktherula - 47k - averycj#3997
Bo Burnham (Inside) (RVC) - 250 Epoch - analogspiderweb#7099
BONES - 1k Epoch 110k - 💊 Lüh Minion 💉#1804
Brandy (RVC) - 200 Epoch - fractalfantasy#2748
Brendon Urie - Panic! at the Disco - 49k - Budman#5216 & Bowl#2016
Brian Wilson (Modern Era) (RVC) - 200 Epoch - Jay#0152
Britney Spears - 100k - AIVERSE#5393 |
Britney Spears (Young) - 17k - Frix#2580 |
Britney Spears (RVC) - 500 Epoch - AIVERSE#5393
Bruno Mars - 124.9k - Thompson#2472 |
Bruno Mars (RVC) - 24k - Thompson#2472
Bruno Powroznik (RVC) - 250 Epochs - analogspiderweb#7099
Bryska - 45.6k - CrimsonZockt#2221
Camila Cabello (RVC) - 600 Epoch - LMAO DEAD 😂😂😂#8206
Canserbero - 67k - Frix#2580
Caparezza - 200K - LollenApe#4707
Cazzu - 62k - NuokiFTW#0001
Chano (From Tan Biónica) - 24k - StarBoy#2512
Charlie Dompler (Smiling Friends) (RVC) - 300 Epoch - analogspiderweb#7099 [Zach Hadel / psychicpebbles / Charlie Dompler]
Charlie Puth - 36k - Crewe's Corner#4767
Charlie Scene (From Hollywood Undead) - 14k - ThatOneDuder710#2594 [Rapping]
Chase Atlantic - 500 Epoch - rejekts#0820
Chester Bennington (Linkin Park) - 79k - Cheech#8254 |
Chester Bennington (RVC) - 1k Epoch 40k Steps - sgsavu#0733
Chief Keef - 100k - candy#6483
Childish Gambino (RVC) - 1k Epoch - kalomaze#2983
Chris Brown - 105k - Sample.House#0737 [Sounds best using his lower register, when transposed down 1-2 semitones] |
Chris Brown (RVC) - 700 Epoch - RomeTheDaddy#4293
Chris Cornell - 7.4k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Comethazine - 1086 Epoch 25K - sgsavu#0733 [batch size 7, 161 - 9 second samples] [trained on: open mics, interviews, live freestyles]
Comethazine [Mixed Edition] - 1000 Epoch 64.3k - sgsavu#0733 [trained on everything from PURE edition + least amount of voice processing (556, highriser, etc) + Mixed edition sounds more aggressive than PURE but has more artifacts and noise in the resulting audio] |
Comethazine [Pure Edition] - 1000 Epoch 43k - sgsavu#0733 [trained on clean acapellas/vocals from: interviews, open mics, live freestyles]
C.R.O - 42k - visarra#1117
CupcakKe - 100k - HuntyDarling#4808
DaBaby (RVC) - 1k Epoch 70k steps - sgsavu#0733
Danny Ocean - 34k - matias464#2068
Dave Mustaine (Megadeth) (RVC) - 1000 Epoch - trioskosmos#8731
David Bowie - 7.2k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Deku (Izuku Midoriya) (RVC) - 100 Epoch - Anon
Dem Jointz (RVC) - 4.6k - SamV1sion#5354
Deuce (From Hollywood Undead) (RVC) - 1K Epoch - sgsavu#0733
Digga D (RVC) - 1000 Epoch 5.6k Steps - arturocookinup#5078
Dillom - 12.8k - Xvalen#3936
Dio Brando (From JoJo's Bizzare Adventure) (RVC) - 10k Steps - nicegame#6990
Diomedes Diaz (Cacique) (RVC) - 200 Epoch - [El Cacique de la Junta]
Doja Cat - 163.2k - #7280
Don Toliver - 88k - Alei#0950 |
Don Toliver - 68k - Lightning McQueen#0001 [68k Cleaner/Better than 88k version]
Drake - 100k - Snoop Dogg#8709 |
Drake (RVC) - ???? - Snoop Dogg#8709
Dua Lipa - 72k - aimelody#5393
Duki - 116.8k - Andres0i#4229 [si lo van a probar usen audios sin tune y sin entonaciones, de resto no les va a servir] |
Duki - 75k - Labrador#6962 |
Duki - 1k - 0900#9787 |
Duki (RVC) - 250 Epoch - diegoAsdf#9942
Ed Sheeran (RVC) - 1000 Epoch - AIVERSE#5393
Eddie Vedder - 48.8k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
El Puto Coke - 10k - Vigo#2099
Eladio Carrión - 40k - blaise#9999
Elon Musk - 99K - Stephen5311#6349
Elton John - 14k - Frix#2580
Eminem (General Model v1) - 86k - Bowl#2016
Eminem (SLIM SHADY Edition) - 209k - ???????? |
Eminem (Slim Shady Era) - 400 Epoch 48k Steps - SpaceCypher#6133 |
Eminem (New Era) (RVC) - 1k Epoch - Bowl#2016 & TRB Harry$#7680
Enna Alouette (NIJISANJI EN) - 10k - dacoolkid44#4173
Eric Cartman - 10.2k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Fase Yoda - 50k - Kyume ☥ (Méry)#4518
Feid - 147k - CAMARA DE GTX#4459
Ferxxo - ???? - KHAKO#8845
Foda C (French Rapper) - 30k - Kyume ☥ (Méry)#4518
Frank Ocean - 400k - Yurboii#8420 [30kEpoch70minDataset] |
Frank Ocean (RVC) - 18.2k Steps, 210 Epoch - TheLosslessPlug#3202 |
Frank Ocean (RVC) - 500 Epoch - Hubert Paul Flatt#9804
Freddie Mercury - 300k - Bowl#2016 & Roberto89#2726 & musictrackcenter#4011 |
Freddie Mercury - 125k - jev217#8700 |
Freddie Mercury (RVC) - Unknown Steps - K7#4523 [Around 1000 epochs, kinda better than sovits model]
Future - 45k - candy#6483 |
Future (RVC) - 2.7k - arturocookinup#5078
Gawr Gura (Hololive EN) - 30k - dadcoolkid44#4173 |
Gawr Gura (RVC) - 126 Epoch - RaymondReddington#6845
George Harrison - ???? - ZGLM#6250 [batch size of 4,927 samples and 101 epochs]
George Michael (RVC) - 500 Epoch - clubbedsam#4419 [Trained on Crepe]
Giovanna Grigio (Chiquititas 2013 Era) - 31.2k - CrimsonZockt#2221
Goku (RVC) - ???? - nicegame#6990
Gunna - 123k - elijah#2251 [Sounds bad with high notes] |
Gunna (RVC) - 3.5k Steps - 1ski#4245
Haachama (Hololive JP) RVC - 1000 Epoch - dacoolkid44#4173 & mochikiri-chan#0665
Half Life 2 (Male 07) (RVC) - 1K Epoch 28K Steps - 💊 Lüh Minion 💉#1804
Harry Styles - 72k - Melatone#1344 |
Harry Styles - 56k - K7#4523
Hayley Williams (From Paramore) - 300k - Thompson#2472 |
Hayley Williams (From Paramore) (RVC) - 600 Epoch - owl#1313
Hef (RVC) - 250 Epoch 1362 Steps - arturocookinup#5078
Homer Simpson - 22k - AnthonyFandom70100#9529 [voiced by Dan Castellaneta]
Hoshimachi Suisei (Hololive JP) (RVC) - ???? - Shiro-chan#9415
Hozier (RVC) - 270 Epoch - Jatazgo#2719
Hyunjin (From Stray Kids) - ???? - Smile WRLD#9877
Ibai - 11k - blaise#9999
Ice Spice - ???? - ayydot#7545 |
Ice Spice (RVC) - 11k - Zeuz Makes Music#6014
Indio Solari - 60k - RedamOk#7021
Inugami Korone (Hololive JP) (RVC) Upd 5.2.23 - ???? dacoolkid44#4173 mochikiri-chan#0665
Irene (From Red Velvet) - 4k - Smile WRLD#9877
Isaac Kleiner (From Half-Life 2) - 500 Epoch - jakeH#5394
IU (RVC) - 1k Epoch 99k Steps - baloneyboy#4232 |
IU (RVC) - 800 Epoch - checkmate#2840
J Cole - 100k - #7280
Jaghit Singh (Indian Ghazal) (RVC) - 400 Epoch 48k Steps - SpaceCypher#6133
James Hetfield - 49.6k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Jay Kay (Jamiroquai lead singer) - 40k - l3af#3435
Jay Z - 54.4k - justinjohn-03#4987
Jamiroquai - 44k - ????
Jeff Lynne (Electric Light Orchestra) (RVC) - 325 Epoch - Jay#0152
Jennie Kim (From BLACKPINK) (RVC) - 300 Epoch - ???? |
Jennie Kim (From BLACKPINK) - 65k - hristy#0059
Jeon So-yeon (From (G)I-DLE) - 800 Steps - Smile WRLD#9877
Jhene Aiko - 61.6k - ariscult#6164 |
Jhene Aiko (RVC) - 175 Epoch - baloneyboy#4232
Jihyo (Twice) - 1.6k - Smile WRLD#9877
Jim James (My Morning Jacket) (RVC) - 5k - Jay#0152
Jimin (From BTS) - 24K - neoculture#4390
Jisoo (From BLACKPINK) - 113k - RadmirGrande#0544 |
Jisoo (From BLACKPINK) (RVC) - 250 Epoch - Moonkissed#1774 Arithyst#3931
Joba of BROCKHAMPTON - 15k - asher roth#3637
John F. Kennedy (JFK) (RVC) - 600 Epoch 53k Steps - Disc#0287
John Frusciante (RVC) - 1k Epoch - sgsavu#0733
John Lennon - 78k - Vlader#7108 |
John Lennon - 365k - Anon [Beatles AI Discord] |
John Lennon (1970 Era) (RVC) - 5k - Jay#0152
Joji (RVC) - 32k - MentosAndRice#8492
Jotaro Kujo (From JoJo's Bizzare Adventure) (RVC) - 15k Steps - nicegame#6990
Joy (From Red Velvet) (RVC) - 200 Epoch - bee#0069
Juice WRLD - 160k - ryyyy#5003 |
Juice WRLD (Agressive) - 28k - BigDRᗩCO$O#2129 |
Juice WRLD - 1k Epoch 15k Steps - sgsavu#0733
Julia Volkova (From t.A.T.u.) - 500 Epoch - JpopKARAOKE#6331
Jung Kook (RVC) - 4k Epoch - MentosAndRice#8492 [v3 APR 25 2023] |
Jung Kook - 5k - MentosAndRice#8492 |
Jung Kook (RVC) - 200 Epoch 350 steps - rejekts#0820 [70mb version, 200 Epoch @ 20 Batch Size, 35 clips] |
Jung Kook - 60k - Moonkissed#1774 & Arithyst#3931
Justin Bieber - 67k - AguacateDev#4071
K Suave (RVC) - 700 Epoch - checkmate#2840
Kai - Kim Jong-in (From Exo) - 34.4k Steps - YH#9495
Kanye West - 199.2k - Pyeon Yeongsun #5759 - **Internet Wide Release aka ye200k** |
Kanye West (RVC) - ???? - Wil#7050 [ran to 1000 epochs] |
Kanye West - 112k - ???? (Author said 100k and model is called yeversiontwo) |
Kanye West (RVC) - 233.3k Steps, 1000 epoch - Wil#7050
Katy Perry - 28k - RaulBlue#3655
Ken Carson (Only Interviews) - 52k - BigDRᗩCO$O#2129 |
Ken Carson (Rapping Vocals) - 59k - averycj#3997
Kendrick Lamar - 67.2k - Snoop Dogg#8709 |
Kendrick Lamar (RVC) - ???? - Snoop Dogg#8709 |
Kendrick Lamar - 100.2k - okcool#5237 [Might be overtrained]
Khea - 20.8k - NuokiFTW#0001
Kid Mess (Alpha) - 0.8k - Cowton#5872 & kesnomanaow#3304
Kidd Keo - 32k - NuokiFTW#0001
Kim Chaewon (From LE SSERAFIM) (Beta) - 500 Epoch - codebloodedgirl6#2315
Kim Garam (From LE SSERAFIM) (RVC) - 300 Epoch - codebloodedgirl6#2315
Kim Seokjin (From BTS) - 24k - neoculture#4390
Kim Taehyung - 24k - neoculture#4390
Kizaru - 45.6k - CrimsonZockt#2221
Krystal Jung (RVC) - 1008 Epoch - Shabi_Chats#0606 [Works better with high notes]
Kurt Cobain - 138.6k - #7280
Kurtains (RVC) - 500 Epoch - Autumn#4768
L-Gante - 12k - StarBoy#2512
La+ Darkness (Hololive JP) - 12k - dacoolkid44#4173 | La+ Darkness (Hololive JP) (RVC) - Updated 4.29.2023 - mochikiri-chan#0665 & dacoolkid44#4173
Lady Gaga - 14k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Lalisa Manoban - ??? - Smile WRLD#9877
Lana Del Rey - 100k - K7#4523 |
Lana Del Rey (RVC) - 1k Epoch 74k Steps - sgsavu#0733
Lauryn Hill - 45k - averycj#3997
Lena Katina (From t.A.T.u.) (RVC) - 300 Epoch - JpopKARAOKE#6331
Liam Gallagher - 18.4k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Lil Baby (RVC) - 500 Epoch - arturocookinup#5078 [Batch Size: 20]
Lil Dicky (RVC) - 1000 Epoch - Carson#1111
Lil Nas X - 26K - riddle#3363
Lil Tracy - ???? - Sztef#7028
Lil Peep - 33k - Sztef#7028
Lil Uzi Vert - 80k - ShadowTB#8205 |
Lil Uzi Vert - 1k Epoch 37k Steps - sgsavu#0733 [batch size 6]
Lil Yachty - 10k Epoch 120k - game#0102
Lily (From NMIXX) (RVC) - 250 Epoch - jisoos cat#7462 [Works better with high notes]
Lisa (From BLACKPINK) (RVC) - 900 Epoch - checkmate#2840
Lisa Simpson - 22k - AnthonyFandom70100#9529 |
Lisa Simpson (RVC) - 250 Epoch - AnthonyFandom70100#9529
Liz (From IVE) - 800 steps - Smile WRLD#9877
Logic (RVC) - 1k Epoch 116k Steps - sgsavu#0733
Luis Miguel - 82.4k - jrbeat#4961
Luther (French Rapper) - 50k - Kyume ☥ (Méry)#4518
Maeve (From Paladins) - 1600 Epoch - wlrkt#2520
Maria Becerra - 122k - dariovelaam#3542
Mariah Angeliq - 10k - remix#7551
Marina Sena - 8.8k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Matt Bellamy (From Muse) (RVC) - 200 Epoch 61k Steps - Ryanz#0053
MCParodyVoice - ???? - TheEpicRock7#9557
Melanie Martinez - 72K - aimelody#5393 |
Melanie Martinez (RVC) - 1000 Epoch - AIVERSE#5393
Maria Mendonça - 10.4k - hugo97#5776
Mariah Carey (RVC) - 300 Epoch - fractalfantasy#2748
MF Doom - 45k - Mellon#2653
Michael Jackson - 83k - clubbedsam#4419 |
Michael Jackson (RVC) - 1k Epoch - premydaremy#2498 |
Michael Jackson - 150k - Nyxel#7778 |
Michael Jackson (RVC) - 1k Epoch - tea#6949 [Harsh Vocals]
Mikey Sawyer of Miss Fortune - 336k - mikeysawyermf#3327
Miko - ???? - ????
Miley Cyrus (RVC) - 750 Epoch - AIVERSE#5393
Mina Myoi (From TWICE) - 2k - ⭐ 𝓚𝓾𝓶𝓪 ⭐ ʕっ•ᴥ•ʔっ#0001
Mona Lisa - 10k - COMEHU#2094
MoonMan - 120k - ????
Mon Laferte (RVC) - 600 Epoch - AnotherNoName#3807
Mora - 73.6k - NuokiFTW#0001
Morad - 11k - blaise#9999
Mordecai (RVC) - 3.6k steps, 750 epochs - kalomaze#2983 [39 clips, 6 minutes long dataset]
Morgenshtern - 15k - lunnaholy#0147
Mori Calliope (Hololive EN) - 8.8k - dacoolkid44#4173
Myke Towers - 100k - Labrador#6962
Nas (King's Disease Era) (SVC) - 171k - bola#1593
NCT Haechan (SVC) - Unknown - ทับบค#2007
NCT Jaemin (RVC) - Unknown - ทับบค#2007
NCT Jeno (RVC) - 350 Epoch 11k Steps - ทับบค#2007
NCT Mark Lee (RVC) - Unknown - ทับบค#2007
NCT Renjun (RVC) - 250 Epoch 9k Steps - ทับบค#2007
Neyo - 80k - subraiz#4688 & NoRappersAllowed#1186
Nicky Jam - 25k - ????
Nicki Minaj - 64k - LMAO DEAD 😂😂😂#8206 |
Nicki Minaj - 27.2k - COMEHU#2094
Nicki Nicole - 120k - StarBoy#2512
Ninomae Ina'nis (Hololive EN) - 30k - dacoolkid44#4173
Nipsey Hussle - 100k - justinjohn-03#4897
NLE Choppa (RVC) - 1000 epochs 51k - sgsavu#0733 [trained on around 15 minutes of edited freestyles, open mics, interviews, and least vocal processed songs]
Notti Osama - 60k - averycj#3997 & fr1ends#0001
Obama - 50k - Nardicality
Oddcast Daniel (From MLG TTS Voice) (RVC) - 300 Epochs - analogspiderweb#7099 [Works best on lower-pitch vocals]
Oki (Oskar Kamiński) - 49.6k - CrimsonZockt#2221
Olivia Rodrigo - 12.8k - karol jozef pelin#2129 |
Olivia Rodrigo - 4k - tahaefe.ipekk#9926
Omar Rudberg - 100k - reee#2204
OptiJuegos - 100k - ştar#7068
Ozuna - 4.8k - ???? |
Ozuna - 4k - matias464#2068
Ozzy Osbourne (Young) (RVC) - 470 Epoch - ancientdeit#3609 [Black Sabbath to Sabotage Era & Blizzard Of Ozz]
oxxxymiron - 24K - Uker#8854
P!NK (RVC) - 1000 Epoch - AIVERSE#5393
Paloma Mami - 32k - Benja#4927
Patrick Star - 500 Epoch - Autumn#4768
Parappa The Rapper (Video Game Character) - 59k - nicegame#6990
Park Jimin (RVC) Demo - 16k - KaraBaby#3426
Patrick Warburton (RVC) - 200 Epoch - Samoa Noah#5570 [AKA Kronk from The Emperor's New Groove and Joe Swanson]
Paul McCartney (SVC) - 200k - Albinator#8386 |
Paul McCartney (Young Era) (RVC) - 1k Epoch - kalomaze#2983 & Albinator#8386 [Trained on harvest pitch inference using the same dataset as the sovits Paul from Albinator]
Paul McCartney (1964 Era) (RVC) - 5k - Jay#0152
Paulo Londra - 100k - Milkitos03#5076 |
Paulo Londra - 10k - 𝖝𝖉𝖎𝖊𝖌𝖔𝖙𝖊#3978
Pekora - ???? - ????
Peso Pluma - 40k - NRM#5257
Peter Griffin (RVC) - 4.5k - Delik#0001
Phil Anselmo - 25k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Plankton (From SpongeBob) (RVC) - 500 Epoch - Hubert Paul Flatt#9804
Playboi Carti - 45k - Snoop Dogg#8709 [This is probably v2 or SVC edition] |
Playboi Carti - 42k - Molo#0001 [Whole Lotta Red Era v2] |
Playboi Carti (Die Lit Era) - 18k - Zeuz Makes Music#6014 |
Playboi Carti v3 (RVC) - ???? - Snoop Dogg#8709 |
Playboi Carti - 46k - BigDRᗩCO$O#2129 [New Sessions Used]
Pop Smoke - 36.8k - sable#0001
Post Malone - 9.6k - Prod. Bad Dude#3218
Postal Dude (From Postal Game) - 2.5k - HuggingFace link to be added |
Postal Dude (From POSTAL 2) - 1K Epochs 25K Steps - 💊 Lüh Minion 💉#1804
Quasimoto - 50k - Bowl#2016
Quevedo - 28k - ALEXSZYT#0432
Ralph Kaminski - 48.8k - CrimsonZockt#2221 |
Ralph Kaminski(alt) - 25.6k - CrimsonZockt#2221
Rauw Alejandro - 4.8k - GOD_Tofer#6528
Rigby (RVC) - 500 Epoch - Hubert Paul Flatt#9804
Rihanna - 200k - Seif#3218 & Provindo#4444 |
Rihanna (alt) - 75k - Seif#3218 & Provindo#4444 |
Rihanna (RVC) - ???? - Snoop Dogg#8709
Ringo Starr (From Beatles) - Unknown Steps - ZGLM#6250 [Beatles AI Discord]
Rivers Cuomo of Weezer (RVC) - 18k Steps, 140 Epoch - rthawk#1502
Rochy RD - 90k - Styl#6247
Rodrigo Barão (Barões Da Pisadinha) - 8k - Dimitri#7373 (Brazilian Portuguese)
Rosaliá - 35k - Styl#6247 |
Rosalia (RVC) - 1k Epoch 15k Steps - Styl#6247
Rose (From BLACKPINK) (RVC) - ???? - uji#8864
Rossa (Indonesian Singer) (RVC) - 350 Epoch - Hengky Wijaya#3599 [Not great at high notes; certain high notes drop down to the lower octave] [350 Epoch, 20 Batch, RVC, trained on filtered voice, podcast, and live-performance audio]
Roxie Wegiel (13+5 Era) - 45.6k - CrimsonZockt#2221
Saiko - 13k - Smile WRLD#9877 |
Saiko - 26.4k - blaise#9999 & m1n1#7342 |
Saiko - 55k - blaise#9999
Samuel L Jackson - 30k - Thompson#2472
Sarah Bonito (Kero Kero Bonito KKB) - 9k - Bwib#8693
SCARLXRD (RVC) - 300 Epoch - YETI#9058
Sean Leon - 3.15k - SamV1sion#5354
Selena Gomez (RVC) - 1000 Epoch - AIVERSE#5393
Sematary - 122k - kala#6494 (trained from Rainbow Bridge 1)
Seulgi Red Velvet - 3.2k - Smile WRLD#9877
Shakira (Classic Era) - 15k - Frix#2580 |
Shakira (Modern Era) (RVC) - 19.8K - kaan36875#0001
Sia (RVC) - 500 Epoch - owl#1313
Shiloh Dynasty - 3.3k - rejekts#0820
Sidhu Moosewala - 10k - Puneet#6616 |
Sidhu Moose Wala (RVC) - 220 Epoch - Sukh#0648 |
Sidhu Moose Wala - 60k - Frix#2580
Solar (From MAMAMOO) - 1.6k - ????
SOOBIN (From TOMORROW X TOGETHER) - 46K - neoculture#4390
SpongeBob SquarePants (RVC) - Unknown Steps - kalomaze#2983 [1k epochs, dataset of 19 clips, trained on pm pitch method]
Stevie Ray Vaughan - 6.2k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Stevie Wonder - 31k - clubbedsam#4419
Stewie Griffin (RVC) - 4.5k - Delik#0001
SUGA (From BTS) - 21.6k - neoculture#4390
Sugarhill Ddot (RVC) - 150 Epoch - Notti Osama#1111 & dacoolkid44#4173
Summer Walker - 11k - ayydot#7545 |
Summer Walker - 400 Epoch - RomeTheDaddy#4293
SZA - 21k - ayydot#7545
Swae Lee - 231k - joman_g#9910
Taeyeon (RVC) - 72k - baloneyboi#4232 |
Taeyeon (FROM SNSD) - 800 Steps - Smile WRLD#9877
Takanashi Kiara (Hololive EN) - 10k - dacoolkid44#4173
Tay-K (RVC) - 300 Epoch - Notti Osama#1111
Taylor Swift - 152k Steps, 7.6k Epoch - JohnnyJones#8867 [7.6k epochs at around 20 steps per epoch, so 152k steps] |
Taylor Swift - 106.4k - ???? [Not the best, but it works well with dry vocals when hitting slightly higher notes] |
Taylor Swift (RVC) - 3.3k Epoch 101k Steps - Filthycasual#5666
TF2 Team Fortress 2 Demoman (RVC) - ???? - nicegame#6990
TF2 Team Fortress 2 Engineer (RVC) - ???? - nicegame#6990
TF2 Team Fortress 2 Heavy (RVC) - ???? - nicegame#6990
TF2 Team Fortress 2 Medic (RVC) - ???? - nicegame#6990
TF2 Team Fortress 2 Scout (RVC) - ???? - nicegame#6990
TF2 Team Fortress 2 Spy (RVC) - ???? - nicegame#6990
The Kid LAROI - 342k - michaell#1404 |
The Kid LAROI - 170k - sable#0001
The Stanley Parable [Narrator] - 4k 286 Epoch - sourcelocation#0001 |
The Stanley Parable [Narrator] (RVC) - 500 Epoch - jakeH#5394
The Weeknd - 94k - Maki Ligon#6713 |
The Weeknd v2 - 110k - lonelystar#4813 |
The Weeknd - 60K - lonelystar#4813 [Alt Version]
Thom Yorke (RVC) - 75 Epochs - ????
Tiago PZK - 55k - StarBoy#2512
Tim Maia - 319.2k - https://huggingface.co/marcoc2/so-vits-svc-4.0-models
Tom Waits (Raspy Voice) (RVC) - 600 Epoch 18K Steps - Disc#0287
Tory Lanez (RVC) - 700 Epoch - Rome#2527
Travis Scott - 100k - RoddyRogu#3360 |
Travis Scott - 77k - Snoop Dogg#8709 |
Travis Scott (RVC) - 6720 Epoch - Snoop Dogg#8709
Trippie Redd - 56k - ShadowTB#8205 [Includes a clustering model]
Troye Sivan - 36k - junjuncuti3#9962
Trump - 68k - joman_g#9910 |
Trump (alt) - 18.5k - Nardicality
Tyler The Creator - 60k - Snoop Dogg#8709
Vegeta (From Dragon Ball Z) (RVC) - 4.9k Steps - nicegame#6990 [DBZ]
Vergil (From Devil May Cry) - 1000 Epoch - just paps#6512
Wendy (From Red Velvet) - 800 Steps - Smile WRLD#9877
Whitney Houston - 33.6K - COMEHU#2094
will.i.am (RVC) - 3250 steps - SamV1sion#5354
Will Stenson - 210k - bruhmoment#7334
xQc - 25k - kyle#9690
XXXTentacion - 165k - Chakras#???? |
XXXTentacion - 55k - Angell#4859 |
XXXTENTACION (RVC) - 150 Epoch 14k Steps - ShadowTB#8205
Yeat - 60k - Vision#3184 [Go to https://medium.com/@vision3/yeat-2-0-model-status-19f47994385f for updates on ver 2.0!]
Yeonjun (From TXT) - 24K - neoculture#4390
Yoko Ono (RVC) - 4k - Jay#0152
Young Leosia - 45.6k - CrimsonZockt#2221
Young Thug - 279.2k - Monki#8033 |
Young Thug - 153k - #7280
YSY A - 40k - Raidener#3810 | [
-0.035501476377248764,
-0.016548575833439827,
-0.022372735664248466,
0.024712061509490013,
0.02640054002404213,
0.04496296867728233,
0.006673549301922321,
0.028433382511138916,
-0.040267836302518845,
0.038997307419776917,
0.051272593438625336,
0.012574386782944202,
0.0002668745582923293,
0... |
ArashEsk95/bert-base-uncased-finetuned-stsb | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-18T22:48:03Z | ---
library_name: stable-baselines3
tags:
- RoombaAToB-from-behavior-cloning-fast-dist-reward
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: BC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-from-behavior-cloning-fast-dist-reward
type: RoombaAToB-from-behavior-cloning-fast-dist-reward
metrics:
- type: mean_reward
value: -101.89 +/- 0.00
name: mean_reward
verified: false
---
# **BC** Agent playing **RoombaAToB-from-behavior-cloning-fast-dist-reward**
This is a trained model of a **BC** agent playing **RoombaAToB-from-behavior-cloning-fast-dist-reward**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
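The `mean_reward` value reported above (-101.89 +/- 0.00) is conventionally the mean and standard deviation of per-episode returns over a set of evaluation episodes, which is what Stable-Baselines3's `evaluate_policy` reports. A minimal sketch of that summary, using hypothetical episode returns (a deterministic policy would produce identical returns, consistent with the +/- 0.00 above):

```python
from statistics import mean, pstdev

def summarize_returns(episode_returns):
    """Return (mean, std) in the 'mean +/- std' form used by the card."""
    return mean(episode_returns), pstdev(episode_returns)

# Hypothetical returns from a few evaluation episodes:
returns = [-101.89, -101.89, -101.89]
m, s = summarize_returns(returns)
print(f"{m:.2f} +/- {s:.2f}")  # -101.89 +/- 0.00
```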
| [
-0.023706573992967606,
-0.006691693793982267,
-0.023048097267746925,
0.04558524116873741,
0.04513177648186684,
0.014041067101061344,
-0.02282063104212284,
-0.015118740499019623,
-0.04911789298057556,
0.059752531349658966,
0.023974895477294922,
-0.011545737273991108,
0.006469107698649168,
-... |
Aravinth/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-18T22:58:59Z | ---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7960
- Answer: {'precision': 0.7169603524229075, 'recall': 0.8046971569839307, 'f1': 0.7582993593476993, 'number': 809}
- Header: {'precision': 0.36619718309859156, 'recall': 0.4369747899159664, 'f1': 0.39846743295019166, 'number': 119}
- Question: {'precision': 0.7883408071748879, 'recall': 0.8253521126760563, 'f1': 0.8064220183486238, 'number': 1065}
- Overall Precision: 0.7307
- Overall Recall: 0.7938
- Overall F1: 0.7609
- Overall Accuracy: 0.8081
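The overall F1 above is the harmonic mean of overall precision and recall, F1 = 2PR / (P + R); a quick check against the reported numbers:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the reported Overall F1 from Overall Precision/Recall:
print(round(f1_score(0.7307, 0.7938), 4))  # 0.7609
```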
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.6386 | 1.0 | 25 | 1.2949 | {'precision': 0.08352668213457076, 'recall': 0.08899876390605686, 'f1': 0.08617594254937162, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.36874571624400276, 'recall': 0.5051643192488263, 'f1': 0.42630744849445323, 'number': 1065} | 0.2628 | 0.3061 | 0.2828 | 0.5116 |
| 1.0433 | 2.0 | 50 | 0.8005 | {'precision': 0.5965447154471545, 'recall': 0.7255871446229913, 'f1': 0.6547685443390964, 'number': 809} | {'precision': 0.1111111111111111, 'recall': 0.058823529411764705, 'f1': 0.07692307692307691, 'number': 119} | {'precision': 0.6574487065120428, 'recall': 0.692018779342723, 'f1': 0.6742909423604757, 'number': 1065} | 0.6139 | 0.6678 | 0.6398 | 0.7293 |
| 0.6891 | 3.0 | 75 | 0.6695 | {'precision': 0.6335650446871897, 'recall': 0.788627935723115, 'f1': 0.7026431718061674, 'number': 809} | {'precision': 0.3246753246753247, 'recall': 0.21008403361344538, 'f1': 0.25510204081632654, 'number': 119} | {'precision': 0.7085862966175195, 'recall': 0.7671361502347418, 'f1': 0.7366997294860236, 'number': 1065} | 0.6616 | 0.7426 | 0.6998 | 0.7752 |
| 0.532 | 4.0 | 100 | 0.6270 | {'precision': 0.6573787409700722, 'recall': 0.7873918417799752, 'f1': 0.7165354330708661, 'number': 809} | {'precision': 0.2361111111111111, 'recall': 0.2857142857142857, 'f1': 0.25855513307984795, 'number': 119} | {'precision': 0.7153284671532847, 'recall': 0.828169014084507, 'f1': 0.7676240208877285, 'number': 1065} | 0.6620 | 0.7792 | 0.7158 | 0.7961 |
| 0.4184 | 5.0 | 125 | 0.6174 | {'precision': 0.6837160751565762, 'recall': 0.8096415327564895, 'f1': 0.7413695529145445, 'number': 809} | {'precision': 0.3063063063063063, 'recall': 0.2857142857142857, 'f1': 0.2956521739130435, 'number': 119} | {'precision': 0.7734657039711191, 'recall': 0.8046948356807512, 'f1': 0.7887712839392544, 'number': 1065} | 0.7102 | 0.7757 | 0.7415 | 0.8025 |
| 0.3264 | 6.0 | 150 | 0.6493 | {'precision': 0.6905537459283387, 'recall': 0.7861557478368356, 'f1': 0.7352601156069365, 'number': 809} | {'precision': 0.310126582278481, 'recall': 0.4117647058823529, 'f1': 0.35379061371841153, 'number': 119} | {'precision': 0.7713523131672598, 'recall': 0.8140845070422535, 'f1': 0.7921425308359983, 'number': 1065} | 0.7045 | 0.7787 | 0.7398 | 0.8008 |
| 0.2661 | 7.0 | 175 | 0.6587 | {'precision': 0.6857440166493236, 'recall': 0.8145859085290482, 'f1': 0.7446327683615819, 'number': 809} | {'precision': 0.32575757575757575, 'recall': 0.36134453781512604, 'f1': 0.3426294820717131, 'number': 119} | {'precision': 0.7720970537261699, 'recall': 0.8366197183098592, 'f1': 0.8030644434429923, 'number': 1065} | 0.7089 | 0.7993 | 0.7514 | 0.8038 |
| 0.2246 | 8.0 | 200 | 0.7115 | {'precision': 0.7111356119073869, 'recall': 0.7972805933250927, 'f1': 0.7517482517482517, 'number': 809} | {'precision': 0.2983425414364641, 'recall': 0.453781512605042, 'f1': 0.36, 'number': 119} | {'precision': 0.7891402714932126, 'recall': 0.8187793427230047, 'f1': 0.8036866359447005, 'number': 1065} | 0.7164 | 0.7883 | 0.7506 | 0.8074 |
| 0.1928 | 9.0 | 225 | 0.7130 | {'precision': 0.7094668117519043, 'recall': 0.8059332509270705, 'f1': 0.7546296296296295, 'number': 809} | {'precision': 0.3178294573643411, 'recall': 0.3445378151260504, 'f1': 0.33064516129032256, 'number': 119} | {'precision': 0.7908025247971145, 'recall': 0.8234741784037559, 'f1': 0.8068077276908925, 'number': 1065} | 0.7279 | 0.7878 | 0.7566 | 0.8042 |
| 0.1598 | 10.0 | 250 | 0.7375 | {'precision': 0.7242937853107345, 'recall': 0.792336217552534, 'f1': 0.756788665879575, 'number': 809} | {'precision': 0.375, 'recall': 0.42857142857142855, 'f1': 0.39999999999999997, 'number': 119} | {'precision': 0.788858939802336, 'recall': 0.8244131455399061, 'f1': 0.8062442607897153, 'number': 1065} | 0.7357 | 0.7878 | 0.7608 | 0.8099 |
| 0.1444 | 11.0 | 275 | 0.7719 | {'precision': 0.7027896995708155, 'recall': 0.8096415327564895, 'f1': 0.7524411257897761, 'number': 809} | {'precision': 0.34814814814814815, 'recall': 0.3949579831932773, 'f1': 0.3700787401574803, 'number': 119} | {'precision': 0.7825311942959001, 'recall': 0.8244131455399061, 'f1': 0.8029263831732967, 'number': 1065} | 0.7218 | 0.7928 | 0.7556 | 0.8008 |
| 0.1251 | 12.0 | 300 | 0.7758 | {'precision': 0.7133479212253829, 'recall': 0.8059332509270705, 'f1': 0.7568195008705745, 'number': 809} | {'precision': 0.38095238095238093, 'recall': 0.40336134453781514, 'f1': 0.39183673469387753, 'number': 119} | {'precision': 0.7880434782608695, 'recall': 0.8169014084507042, 'f1': 0.8022130013831259, 'number': 1065} | 0.7323 | 0.7878 | 0.7590 | 0.8077 |
| 0.1124 | 13.0 | 325 | 0.7878 | {'precision': 0.7150776053215078, 'recall': 0.7972805933250927, 'f1': 0.7539450613676213, 'number': 809} | {'precision': 0.38848920863309355, 'recall': 0.453781512605042, 'f1': 0.4186046511627907, 'number': 119} | {'precision': 0.7922312556458898, 'recall': 0.8234741784037559, 'f1': 0.8075506445672191, 'number': 1065} | 0.7337 | 0.7908 | 0.7612 | 0.8094 |
| 0.1077 | 14.0 | 350 | 0.7945 | {'precision': 0.7136612021857923, 'recall': 0.8071693448702101, 'f1': 0.7575406032482598, 'number': 809} | {'precision': 0.36619718309859156, 'recall': 0.4369747899159664, 'f1': 0.39846743295019166, 'number': 119} | {'precision': 0.7887197851387645, 'recall': 0.8272300469483568, 'f1': 0.8075160403299725, 'number': 1065} | 0.7295 | 0.7958 | 0.7612 | 0.8098 |
| 0.1001 | 15.0 | 375 | 0.7960 | {'precision': 0.7169603524229075, 'recall': 0.8046971569839307, 'f1': 0.7582993593476993, 'number': 809} | {'precision': 0.36619718309859156, 'recall': 0.4369747899159664, 'f1': 0.39846743295019166, 'number': 119} | {'precision': 0.7883408071748879, 'recall': 0.8253521126760563, 'f1': 0.8064220183486238, 'number': 1065} | 0.7307 | 0.7938 | 0.7609 | 0.8081 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.017409684136509895,
0.00908689759671688,
0.000006190996373334201,
0.04391948878765106,
0.010995769873261452,
0.021570872515439987,
-0.00011299559992039576,
-0.03054574318230152,
-0.04259303957223892,
0.034731026738882065,
0.03146332502365112,
-0.024363365024328232,
0.011179910972714424,
... |
ArcQ/gpt-experiments | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.91 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it. The filename below is a
# placeholder -- check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="ArcQ/gpt-experiments", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
| [
-0.04981645569205284,
-0.0159827321767807,
-0.008769693784415722,
0.03640187904238701,
0.04100871458649635,
0.003062521805986762,
-0.02129991538822651,
-0.010282340459525585,
-0.0380900502204895,
0.05688010901212692,
0.024112550541758537,
-0.003350223647430539,
0.03232410177588463,
0.00294... |
Arcanos/1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-4.0
language:
- zh
tags:
- legal
- art
---
# Watch The Super Mario Bros. Movie Online
Where can you watch The Super Mario Bros. Movie online for free? Watch The Super Mario Bros. Movie online in full HD and easily keep up with the latest movie news anytime, anywhere!
Watch The Super Mario Bros. Movie online, complete 2023 version, free and in original HD quality.
## Watch The Super Mario Bros. Movie online, free movie download:
[](https://super4kuhdq.com/zh/movie/502356)
➤[https://super4kuhdq.com/zh/movie/502356](https://super4kuhdq.com/zh/movie/502356)
●●Available for download (The Super Mario Bros. Movie 2023): 720p, 1080p, BrRip, DvdRip, YouTube, Reddit, multi-language and high quality●●
Just click through to watch the full version of The Super Mario Bros. Movie online, free and in high definition. Traditional Chinese subtitles are provided, with offline viewing and resumable cross-device playback (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD).
You can enjoy [The Super Mario Bros. Movie 2023] for free in the highest quality. Watch the complete version of The Super Mario Bros. Movie online.
## The Super Mario Bros. Movie: Taiwan release, release date, story, plot introduction, and how to watch, all available here.
The film is adapted from the classic Nintendo games. The penguins' ice kingdom has been invaded; can they fight back? How will Mario and his brother Luigi save the world? A childhood classic rebooted for the big screen, with an exciting adventure about to begin.
Release date: 2023-04-05
Runtime: 92 minutes
Genres: Animation, Adventure, Family, Fantasy, Comedy
## How can you watch The Super Mario Bros. Movie online for free, without ads?
Here you can watch The Super Mario Bros. Movie for free without registration, in full HD 1080p and with no ads. If you use an Apple device and your Android TV supports AirPlay, you can mirror the Apple device's screen to the TV or stream content to it.
## You can also download The Super Mario Bros. Movie for free here!
Find some movies to watch! Below are a few good movie resource sites, each with its own specialty: some focus on curating movies, some on TV series, and some on American shows; hopefully these suggestions help. Xiaodiao Wang (小調網), formerly known as Movie Heaven, is currently one of the larger platforms in China for watching and downloading movies online, mainly offering Xunlei and FlashGet downloads as well as mobile video formats.
We offer the chance to watch the latest movies in full HD quality. Watch The Super Mario Bros. Movie online for free in 1080p. You can access the most prominent film-festival works and movies with Chinese subtitles and in the original version.
### Google keywords:
The Super Mario Bros. Movie
Watch The Super Mario Bros. Movie online
Watch The Super Mario Bros. Movie online (Xiaoya streaming)
Watch The Super Mario Bros. Movie online for free
Watch The Super Mario Bros. Movie online
The Super Mario Bros. Movie 2023 film
Watch The Super Mario Bros. Movie online, full version
The Super Mario Bros. Movie Taiwan release
The Super Mario Bros. Movie Taiwan release date | [
-0.028448637574911118,
-0.016794933006167412,
-0.00016150275769177824,
0.030998390167951584,
0.05031505972146988,
-0.0020418271888047457,
-0.018605707213282585,
-0.001663541654124856,
-0.0421028807759285,
0.038491345942020416,
0.05846714228391647,
-0.012583177536725998,
0.057439740747213364,... |
Arghyad/Loki_small | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-large-finetuned-kinetics-finetuned-engine-subset-R2-K400-20230418_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-finetuned-kinetics-finetuned-engine-subset-R2-K400-20230418_3
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1972
- Accuracy: 0.2647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2700
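With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps from 0 up to 5e-05 over the first 10% of the 2700 training steps, then decays linearly back to 0. A minimal sketch of that schedule (assuming it mirrors the shape of the usual Hugging Face linear-with-warmup scheduler used by the Trainer):

```python
def linear_warmup_lr(step, base_lr=5e-05, total_steps=2700, warmup_ratio=0.1):
    """Linear warmup followed by linear decay, as configured above."""
    warmup_steps = int(total_steps * warmup_ratio)  # 270 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# Peak LR is reached right at the end of warmup, and the schedule ends at 0:
print(linear_warmup_lr(270))   # 5e-05
print(linear_warmup_lr(2700))  # 0.0
```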
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5562 | 0.02 | 55 | 2.5746 | 0.0221 |
| 2.4543 | 1.02 | 110 | 2.5524 | 0.0956 |
| 2.1443 | 2.02 | 165 | 2.5214 | 0.1103 |
| 1.6604 | 3.02 | 220 | 2.6415 | 0.1397 |
| 1.5847 | 4.02 | 275 | 2.9843 | 0.0735 |
| 1.4657 | 5.02 | 330 | 2.7476 | 0.1471 |
| 1.0823 | 6.02 | 385 | 3.0872 | 0.1471 |
| 1.0672 | 7.02 | 440 | 2.9539 | 0.2206 |
| 0.7241 | 8.02 | 495 | 3.5364 | 0.1103 |
| 0.6532 | 9.02 | 550 | 3.1972 | 0.2647 |
| 0.6476 | 10.02 | 605 | 4.1289 | 0.0735 |
| 0.3725 | 11.02 | 660 | 4.4710 | 0.0588 |
| 0.7363 | 12.02 | 715 | 4.9241 | 0.0662 |
| 0.3136 | 13.02 | 770 | 4.8217 | 0.1176 |
| 0.3154 | 14.02 | 825 | 4.2717 | 0.1838 |
| 0.309 | 15.02 | 880 | 4.9466 | 0.0588 |
| 0.3094 | 16.02 | 935 | 5.5394 | 0.0147 |
| 0.3333 | 17.02 | 990 | 5.0940 | 0.0956 |
| 0.2299 | 18.02 | 1045 | 6.3148 | 0.0074 |
| 0.2257 | 19.02 | 1100 | 5.3869 | 0.0588 |
| 0.255 | 20.02 | 1155 | 6.4134 | 0.0147 |
| 0.2335 | 21.02 | 1210 | 6.1413 | 0.0441 |
| 0.3507 | 22.02 | 1265 | 6.2911 | 0.0074 |
| 0.1463 | 23.02 | 1320 | 6.5273 | 0.0074 |
| 0.193 | 24.02 | 1375 | 6.6533 | 0.0074 |
| 0.1167 | 25.02 | 1430 | 6.8094 | 0.0 |
| 0.1168 | 26.02 | 1485 | 6.7632 | 0.0 |
| 0.0511 | 27.02 | 1540 | 7.0046 | 0.0074 |
| 0.1336 | 28.02 | 1595 | 7.2877 | 0.0 |
| 0.1518 | 29.02 | 1650 | 7.3102 | 0.0 |
| 0.1972 | 30.02 | 1705 | 7.1632 | 0.0 |
| 0.0605 | 31.02 | 1760 | 7.2970 | 0.0 |
| 0.1633 | 32.02 | 1815 | 7.3427 | 0.0 |
| 0.1902 | 33.02 | 1870 | 7.4095 | 0.0 |
| 0.132 | 34.02 | 1925 | 7.3169 | 0.0 |
| 0.1226 | 35.02 | 1980 | 7.4196 | 0.0074 |
| 0.115 | 36.02 | 2035 | 7.3248 | 0.0074 |
| 0.1348 | 37.02 | 2090 | 7.1318 | 0.0 |
| 0.1684 | 38.02 | 2145 | 7.6482 | 0.0 |
| 0.0722 | 39.02 | 2200 | 7.5944 | 0.0074 |
| 0.1155 | 40.02 | 2255 | 7.5615 | 0.0 |
| 0.1425 | 41.02 | 2310 | 7.6454 | 0.0074 |
| 0.1552 | 42.02 | 2365 | 7.4774 | 0.0074 |
| 0.1078 | 43.02 | 2420 | 7.3991 | 0.0074 |
| 0.1169 | 44.02 | 2475 | 7.3240 | 0.0 |
| 0.1438 | 45.02 | 2530 | 7.4133 | 0.0 |
| 0.1227 | 46.02 | 2585 | 7.4592 | 0.0 |
| 0.0716 | 47.02 | 2640 | 7.5590 | 0.0 |
| 0.2077 | 48.02 | 2695 | 7.5708 | 0.0 |
| 0.0731 | 49.0 | 2700 | 7.5710 | 0.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2
| [
-0.052864812314510345,
-0.004498234950006008,
0.01718749850988388,
0.001998813124373555,
0.04649074003100395,
0.005842137616127729,
-0.0031735096126794815,
-0.02074049785733223,
-0.025571268051862717,
0.051351528614759445,
0.01852293871343136,
-0.007654000073671341,
0.01956828683614731,
0.... |
AriakimTaiyo/DialoGPT-small-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 116.20 +/- 63.15
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 1000000,
'learning_rate': 0.00025,
'num_envs': 16,
'num_steps': 1024,
'anneal_lr': True,
'gae': True,
'gamma': 0.999,
'gae_lambda': 0.98,
'num_minibatches': 64,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'Yanrds/ppo-LunarLander-v2-from-scratch',
'batch_size': 16384,
'minibatch_size': 256}
```
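The derived `batch_size` and `minibatch_size` entries follow the usual CleanRL PPO convention: one rollout collects `num_envs * num_steps` transitions, which are then split into `num_minibatches` minibatches. Checking against the hyperparameters listed above:

```python
num_envs, num_steps, num_minibatches = 16, 1024, 64

batch_size = num_envs * num_steps           # transitions per rollout
minibatch_size = batch_size // num_minibatches

print(batch_size)      # 16384
print(minibatch_size)  # 256
```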
| [
-0.008564863353967667,
0.003961611073464155,
-0.016179032623767853,
0.015946390107274055,
0.058172836899757385,
-0.028475558385252953,
0.007389873266220093,
-0.03538810834288597,
-0.02721855603158474,
0.06669645011425018,
0.026667796075344086,
-0.02702317386865616,
-0.0013992596650496125,
... |
Aries/T5_question_generation | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: judithrosell/model_appsII
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# judithrosell/model_appsII
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Train Accuracy: 0.4889
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 0, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| nan | nan | 0.4889 | 0 |
| nan | nan | 0.4889 | 1 |
| nan | nan | 0.4889 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.03541596606373787,
-0.014494289644062519,
-0.020400097593665123,
0.02675800211727619,
0.03554781153798103,
0.01189838070422411,
-0.015072275884449482,
-0.01710919477045536,
-0.02682967111468315,
0.057590655982494354,
0.024860765784978867,
-0.0296479444950819,
0.019217681139707565,
0.038... |
Arina/Erine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-24T12:26:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9509677419354838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2354
- Accuracy: 0.9510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0114 | 1.0 | 1907 | 0.9483 | 0.8577 |
| 0.2978 | 2.0 | 3814 | 0.2961 | 0.9368 |
| 0.097 | 3.0 | 5721 | 0.2422 | 0.9474 |
| 0.0393 | 4.0 | 7628 | 0.2349 | 0.9519 |
| 0.023 | 5.0 | 9535 | 0.2354 | 0.9510 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0+cu113
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.007161961402744055,
0.0030357015784829855,
-0.02556309662759304,
0.03968251496553421,
0.047448765486478806,
0.014661172404885292,
-0.03204745799303055,
-0.024731118232011795,
-0.02715419977903366,
0.05466020852327347,
0.006920835934579372,
-0.0125309769064188,
0.018375301733613014,
0.05... |
ArnaudPannatier/MLPMixer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1683.03 +/- 258.52
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
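The snippet above is the card's placeholder. A minimal, hedged sketch of loading such a checkpoint with `huggingface_sb3` (the repo id and filename are placeholders, not confirmed for this card):

```python
def load_trained_agent(repo_id: str, filename: str):
    # Imports are deferred so the sketch stays importable without the RL stack installed.
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import A2C

    checkpoint_path = load_from_hub(repo_id=repo_id, filename=filename)
    return A2C.load(checkpoint_path)

# Example (placeholder values):
# agent = load_trained_agent("<user>/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
```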
| [
-0.04537633806467056,
-0.00046663597458973527,
-0.021509438753128052,
0.03229323774576187,
0.04309874773025513,
0.01797950081527233,
-0.018488004803657532,
-0.030852187424898148,
-0.03757733106613159,
0.06925389915704727,
0.021659335121512413,
0.0031308424659073353,
0.015009816735982895,
0... |
Arnold/common_voiceha | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: conversational
widget:
- text: "Below is an instruction that describes a task, paired with an input that provides further context. You, the Assistant, should generate a response as if it were an abstract for an academic or technical paper on the query along with a methodology. Then generate an Agent Reflection where you create a long form response as if from subject matter expert, be verbose, diligent, and creative in your application of knowledge, apply it through the lens of the response generated by the assistant. Look for flawed reasoning, faulty logic, or other mistakes in the method. Finally, generate a final response and method for the user with the Assistant abstract and Reflection analysis as augmentations to the generation\n\n### Instruction:\nTell me a joke about axolotls\n\n### Response:\n"
example_title: "Tell me a joke"
- text: "Below is an instruction that describes a task, paired with an input that provides further context. You, the Assistant, should generate a response as if it were an abstract for an academic or technical paper on the query along with a methodology. Then generate an Agent Reflection where you create a long form response as if from subject matter expert, be verbose, diligent, and creative in your application of knowledge, apply it through the lens of the response generated by the assistant. Look for flawed reasoning, faulty logic, or other mistakes in the method. Finally, generate a final response and method for the user with the Assistant abstract and Reflection analysis as augmentations to the generation\n\n### Instruction:\nExplain how chickens have affected global climate change.\n\n### Response:\n"
example_title: "chicken climate change"
---
- fine-tuned from `anon8231489123/vicuna-13b-GPTQ-4bit-128g`
- dataset: https://github.com/vaguenebula/AlpacaDataReflect/blob/main/alpaca_reflect_pruned.json
- wandb: https://wandb.ai/wing-lian/huggingface/runs/vuhppjj5/overview
| [
-0.004613801371306181,
0.0025246080476790667,
0.004507698584347963,
0.047850072383880615,
0.05038803070783615,
0.01412104070186615,
-0.016297172755002975,
-0.003638512222096324,
-0.04057280719280243,
0.05145636573433876,
0.034807078540325165,
-0.011606846936047077,
0.012882702052593231,
0.... |
ArpanZS/debug_squad | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2023-04-18T23:51:34Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.07 +/- 4.22
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Yanrds/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The notebook-generated module path was garbled; sf_examples.vizdoom.enjoy_vizdoom is Sample-Factory 2.0's standard VizDoom enjoy entry point.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the garbled module path is replaced with Sample-Factory 2.0's standard VizDoom train entry point.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
| [
-0.04364872723817825,
-0.001816644100472331,
0.010957304388284683,
0.03853512182831764,
0.02495800144970417,
-0.011536243371665478,
-0.010471470654010773,
-0.027674509212374687,
-0.03871229290962219,
0.05500559136271477,
0.036043282598257065,
0.0008158403215929866,
0.01909448765218258,
0.0... |
AshLukass/AshLukass | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-19T00:17:26Z | # `vocabtrimmer/xlm-roberta-base-xnli-en`
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the
[xnli](https://huggingface.co/datasets/xnli) (en).
The following metrics are computed on the `test` split of
[xnli](https://huggingface.co/datasets/xnli) (en).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 84.57 | 84.57 | 84.57 | 84.56 | 84.57 | 84.68 | 84.57 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-en/raw/main/eval.json). | [
-0.022609388455748558,
-0.00283360225148499,
0.019670478999614716,
0.0015913320239633322,
0.013736134395003319,
0.014579422771930695,
-0.011309439316391945,
-0.029555492103099823,
-0.0504952035844326,
0.038043197244405746,
0.03032977320253849,
-0.05529852211475372,
-0.00004525114127318375,
... |
Augustvember/WokkaBot5 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### doggoart5 Dreambooth model trained by brunneis with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| [
-0.02802233397960663,
-0.006327098235487938,
-0.03214065730571747,
0.041307032108306885,
0.03557872027158737,
0.01665843464434147,
0.0031807629857212305,
0.001687026466242969,
-0.021130584180355072,
0.04296659678220749,
0.0261846873909235,
-0.0005179834552109241,
-0.030989808961749077,
0.0... |
Augustvember/test | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-04-19T02:10:24Z | ---
language:
- en
tags:
- causal-lm
license:
- cc-by-nc-sa-4.0
datasets:
- dmayhem93/ChatCombined
- tatsu-lab/alpaca
- nomic-ai/gpt4all_prompt_generations
- Dahoas/full-hh-rlhf
- jeffwan/sharegpt_vicuna
- HuggingFaceH4/databricks_dolly_15k
---
# StableLM-Tuned-Alpha
## Model Description
`StableLM-Tuned-Alpha` is a suite of 3B and 7B parameter decoder-only language models built on top of the `StableLM-Base-Alpha` models and further fine-tuned on various chat and instruction-following datasets.
## Usage
Get started chatting with `StableLM-Tuned-Alpha` by using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()
class StopOnTokens(StoppingCriteria):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
stop_ids = [50278, 50279, 50277, 1, 0]
for stop_id in stop_ids:
if input_ids[0][-1] == stop_id:
return True
return False
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.7,
do_sample=True,
stopping_criteria=StoppingCriteriaList([StopOnTokens()])
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
StableLM Tuned should be used with prompts formatted to `<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...`
The system prompt is
```
<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: StableLM-Tuned-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`StableLM-Tuned-Alpha`) are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)), in-line with the original non-commercial license specified by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
## Training
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 3B | 4096 | 16 | 32 | 4096 |
| 7B | 6144 | 16 | 48 | 4096 |
### Training Dataset
`StableLM-Tuned-Alpha` models are fine-tuned on a combination of five datasets:
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), which consists of 400k prompts and responses generated by GPT-4;
[Anthropic HH](https://huggingface.co/datasets/Dahoas/full-hh-rlhf), made up of preferences about AI assistant helpfulness and harmlessness;
[DataBricks Dolly](https://github.com/databrickslabs/dolly), comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;
and [ShareGPT Vicuna (English subset)](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), a dataset of conversations retrieved from [ShareGPT](https://sharegpt.com/).
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters:
| Parameters | Batch Size | Learning Rate | Warm-up | Weight Decay | Betas |
|------------|------------|---------------|---------|--------------|-------------|
| 3B | 256 | 2e-5 | 50 | 0.01 | (0.9, 0.99) |
| 7B | 128 | 2e-5 | 100 | 0.01 | (0.9, 0.99) |
## Use and Limitations
### Intended Use
These models are intended to be used by the open-source community in chat-like applications in adherence with the [CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the helpful hand of Dakota Mahan ([@dmayhem93](https://huggingface.co/dmayhem93)).
## Citations
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
| [
-0.037376657128334045,
0.006198666989803314,
0.0053041111677885056,
0.05529505014419556,
0.058810532093048096,
0.0002655822318047285,
-0.01295460294932127,
-0.01948087103664875,
-0.015795551240444183,
0.06361152976751328,
0.035102859139442444,
-0.014717907644808292,
0.01929588057100773,
0.... |
Augustvember/wokka2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: slimhoods
---
### Slimhoods Dreambooth model trained by Grigsss with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
slimhoods (use that on your prompt)

| [
-0.03336658701300621,
-0.023259980604052544,
-0.020574357360601425,
0.024292897433042526,
0.03512072190642357,
0.016822056844830513,
-0.03890823945403099,
-0.003271804191172123,
-0.01622297242283821,
0.05196778103709221,
0.02085896022617817,
0.016242658719420433,
-0.0018658105982467532,
0.... |
Axon/resnet34-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- summarization
language:
- hi
widget:
- text: "I love AutoTrain 🤗"
datasets:
- prajwalpatankar/autotrain-data-hinsum1
co2_eq_emissions:
emissions: 11.16171264909688
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 50655120899
- CO2 Emissions (in grams): 11.1617
## Validation Metrics
- Loss: 2.648
- Rouge1: 8.796
- Rouge2: 2.976
- RougeL: 7.185
- RougeLsum: 7.641
- Gen Len: 17.798
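For intuition, Rouge1 above is the unigram-overlap F1 between generated and reference summaries. A self-contained sketch of the idea (ignoring stemming and tokenizer subtleties used by the real metric):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    # Unigram counts for both texts; overlap is the multiset intersection.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat", "the cat sat on the mat"), 3))  # 0.667
```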
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/prajwalpatankar/autotrain-hinsum1-50655120899
``` | [
-0.021940356120467186,
-0.021127499639987946,
0.005047050304710865,
0.02612200751900673,
0.029286552220582962,
0.016471050679683685,
-0.033935047686100006,
-0.02306649275124073,
-0.04223065823316574,
0.08085181564092636,
0.01822071149945259,
0.015891971066594124,
0.018566833809018135,
0.03... |
Axon/resnet50-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-04-19T02:44:12Z | ---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- WilliamWen/autotrain-data-unit_cata_io
co2_eq_emissions:
emissions: 1.228627476310992
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 50661120907
- CO2 Emissions (in grams): 1.2286
## Validation Metrics
- Loss: 0.014
- Accuracy: 0.997
- Precision: 0.895
- Recall: 0.938
- F1: 0.916
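As a consistency check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
precision, recall = 0.895, 0.938

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.916
```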
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/WilliamWen/autotrain-unit_cata_io-50661120907
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("WilliamWen/autotrain-unit_cata_io-50661120907", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("WilliamWen/autotrain-unit_cata_io-50661120907", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | [
-0.028152404353022575,
-0.021847346797585487,
-0.0042975167743861675,
0.029603976756334305,
0.04077900946140289,
0.03608652576804161,
-0.034382909536361694,
-0.010761233046650887,
-0.04591601714491844,
0.07757090032100677,
0.023577556014060974,
0.021293923258781433,
-0.005702146794646978,
... |
Aybars/XLM_Turkish | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1613737733641363456/6HH4412Q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shai Perednik 🇦🇷</div>
<div style="text-align: center; font-size: 14px;">@shaiss</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Shai Perednik 🇦🇷.
| Data | Shai Perednik 🇦🇷 |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 816 |
| Short tweets | 239 |
| Tweets kept | 2171 |
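The kept count follows directly from the filtering shown in the table: retweets and short tweets are removed from the downloaded set. As a quick check:

```python
# "Tweets kept" = tweets downloaded minus filtered retweets and short tweets.
downloaded, retweets, short_tweets = 3226, 816, 239
kept = downloaded - retweets - short_tweets
print(kept)  # 2171
```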
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zv7zi2qo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shaiss's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/san3j8zw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/san3j8zw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/shaiss')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| [
0.01045482698827982,
-0.03294379264116287,
0.003079561050981283,
0.03882709518074989,
0.05016874521970749,
0.011237602680921555,
-0.029081258922815323,
-0.009111168794333935,
-0.03619541600346565,
0.038284480571746826,
-0.007151734083890915,
-0.004866788629442453,
0.0058477893471717834,
0.... |
Ayham/distilbert_gpt2_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-04-19T03:15:05Z | ---
license: apache-2.0
language:
- en
metrics:
- rouge
library_name: transformers
pipeline_tag: summarization
---
Summarizes similar sentences from Amazon reviews. | [
-0.01601385697722435,
-0.004270001780241728,
-0.0030447577591985464,
0.03191488981246948,
0.05108669400215149,
0.01395797822624445,
-0.020028503611683846,
0.029101036489009857,
-0.0622139647603035,
0.05455093830823898,
0.05772298201918602,
0.016926197335124016,
-0.006446147803217173,
0.026... |
Ayham/distilbert_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2023-04-19T03:18:48Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cybersecurity_ner-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cybersecurity_ner-v2
This model is a fine-tuned version of [sudipadhikari/cybersecurity_ner-v2](https://huggingface.co/sudipadhikari/cybersecurity_ner-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1566
- Precision: 0.6414
- Recall: 0.6325
- F1: 0.6369
- Accuracy: 0.9666
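As a quick sanity check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
# F1 = 2 * P * R / (P + R), using the evaluation precision and recall above.
precision, recall = 0.6414, 0.6325
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6369
```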
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 176 | 0.1432 | 0.6236 | 0.6036 | 0.6135 | 0.9657 |
| No log | 2.0 | 352 | 0.1433 | 0.6655 | 0.6058 | 0.6342 | 0.9644 |
| 0.0341 | 3.0 | 528 | 0.1428 | 0.6124 | 0.6229 | 0.6176 | 0.9659 |
| 0.0341 | 4.0 | 704 | 0.1550 | 0.6345 | 0.6175 | 0.6259 | 0.9659 |
| 0.0341 | 5.0 | 880 | 0.1566 | 0.6414 | 0.6325 | 0.6369 | 0.9666 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.03175478056073189,
-0.015246350318193436,
0.0035302562173455954,
0.01136829610913992,
0.028389837592840195,
0.01753394305706024,
-0.010612505488097668,
-0.009325277991592884,
-0.047003183513879776,
0.06944159418344498,
0.04021892696619034,
-0.015917452052235603,
0.006562697701156139,
0.... |
Ayham/xlmroberta_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- xFahrenheit/autotrain-data-mbart25-3000-hin-en
co2_eq_emissions:
emissions: 18.562163681439518
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 50671120937
- CO2 Emissions (in grams): 18.5622
## Validation Metrics
- Loss: 2.166
- Rouge1: 24.690
- Rouge2: 9.961
- RougeL: 19.170
- RougeLsum: 21.730
- Gen Len: 77.668
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/xFahrenheit/autotrain-mbart25-3000-hin-en-50671120937
``` | [
-0.023965956643223763,
-0.016561392694711685,
0.006270712241530418,
0.030657749623060226,
0.02693937160074711,
0.017336783930659294,
-0.03711792081594467,
-0.021595971658825874,
-0.050150129944086075,
0.07779960334300995,
0.015639696270227432,
0.024730419740080833,
0.010552016086876392,
0.... |
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2023-04-19T03:54:52Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: HaiderAUT/pyramid_colab
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| [
-0.05098938196897507,
0.0008307765820063651,
-0.003047002712264657,
0.050254106521606445,
0.02490086480975151,
0.028996776789426804,
-0.013586091808974743,
-0.02307792752981186,
-0.002292275195941329,
0.050111740827560425,
0.02378307841718197,
-0.01080943550914526,
0.007593970280140638,
0.... |
Ayham/xlnetgpt2_xsum7 | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: byt5-small-wikipron-eng-latn-us-broad-p2g
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-wikipron-eng-latn-us-broad-p2g
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2595
- Per: 0.4628
- Gen Len: 8.4996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
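With a warm-up ratio of 0.1 and 382 optimizer steps per epoch (per the results table), the linear scheduler warms up over the first 382 of the 3820 total steps:

```python
# Warm-up steps implied by lr_scheduler_warmup_ratio = 0.1 over a 10-epoch run.
steps_per_epoch, num_epochs, warmup_ratio = 382, 10, 0.1
total_steps = steps_per_epoch * num_epochs
warmup_steps = round(total_steps * warmup_ratio)
print(total_steps, warmup_steps)  # 3820 382
```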
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.4797 | 1.0 | 382 | 0.4371 | 0.6951 | 8.4302 |
| 0.4823 | 2.0 | 764 | 0.3543 | 0.5974 | 8.4338 |
| 0.3878 | 3.0 | 1146 | 0.3081 | 0.545 | 8.4394 |
| 0.3378 | 4.0 | 1528 | 0.2904 | 0.518 | 8.449 |
| 0.3061 | 5.0 | 1910 | 0.2736 | 0.5004 | 8.4612 |
| 0.2823 | 6.0 | 2292 | 0.2664 | 0.4893 | 8.4734 |
| 0.265 | 7.0 | 2674 | 0.2626 | 0.4747 | 8.4721 |
| 0.2502 | 8.0 | 3056 | 0.2612 | 0.4697 | 8.4945 |
| 0.2388 | 9.0 | 3438 | 0.2592 | 0.4633 | 8.489 |
| 0.231 | 10.0 | 3820 | 0.2595 | 0.4628 | 8.4996 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.1.dev0
- Tokenizers 0.13.2
| [
-0.02017839066684246,
-0.005601097829639912,
-0.00806775689125061,
0.034363724291324615,
0.0178048238158226,
0.0021606425289064646,
-0.021890200674533844,
-0.002434328431263566,
-0.045553337782621384,
0.05454184487462044,
0.011755023151636124,
-0.03685236722230911,
0.0024254387244582176,
0... |
Aymene/opus-mt-en-ro-finetuned-en-to-ro | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: clonex
---
### clonex Dreambooth model trained by Grigsss with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
clonex (use that on your prompt)

| [
-0.01946292445063591,
-0.025914527475833893,
-0.023244142532348633,
0.03734852746129036,
0.03498868644237518,
0.030682727694511414,
-0.04138082638382912,
-0.022566691040992737,
-0.01316286064684391,
0.04224100708961487,
0.02375679835677147,
0.00895006489008665,
-0.02577025629580021,
0.0404... |
Ayu/Shiriro | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
tags:
- causal-lm
license: cc-by-nc-sa-4.0
datasets:
- dmayhem93/ChatCombined
- tatsu-lab/alpaca
- nomic-ai/gpt4all_prompt_generations
- Dahoas/full-hh-rlhf
- jeffwan/sharegpt_vicuna
- HuggingFaceH4/databricks_dolly_15k
---
# StableLM-Tuned-Alpha
## Model Description
`StableLM-Tuned-Alpha` is a suite of 3B and 7B parameter decoder-only language models built on top of the `StableLM-Base-Alpha` models and further fine-tuned on various chat and instruction-following datasets.
## Usage
Get started chatting with `StableLM-Tuned-Alpha` by using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()
class StopOnTokens(StoppingCriteria):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
stop_ids = [50278, 50279, 50277, 1, 0]
for stop_id in stop_ids:
if input_ids[0][-1] == stop_id:
return True
return False
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.7,
do_sample=True,
stopping_criteria=StoppingCriteriaList([StopOnTokens()])
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
StableLM Tuned should be used with prompts formatted to `<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...`
The system prompt is
```
<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
```
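A prompt in this format can be assembled with a small helper; this is a sketch using plain string formatting (the system text here is abbreviated, and no model is loaded):

```python
# Build a StableLM-Tuned prompt from the documented special tokens.
def build_prompt(system_text: str, user_message: str) -> str:
    return f"<|SYSTEM|>{system_text}<|USER|>{user_message}<|ASSISTANT|>"

prompt = build_prompt("# StableLM Tuned (Alpha version)\n", "What's your mood today?")
print(prompt)
```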
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: StableLM-Tuned-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`StableLM-Tuned-Alpha`) are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)), in-line with the original non-commercial license specified by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
## Training
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 3B | 4096 | 16 | 32 | 4096 |
| 7B | 6144 | 16 | 48 | 4096 |
### Training Dataset
`StableLM-Tuned-Alpha` models are fine-tuned on a combination of five datasets:
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), which consists of 400k prompts and responses generated by GPT-4;
[Anthropic HH](https://huggingface.co/datasets/Dahoas/full-hh-rlhf), made up of preferences about AI assistant helpfulness and harmlessness;
[DataBricks Dolly](https://github.com/databrickslabs/dolly), comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;
and [ShareGPT Vicuna (English subset)](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), a dataset of conversations retrieved from [ShareGPT](https://sharegpt.com/).
### Training Procedure
The models are trained via supervised fine-tuning on the aforementioned datasets, in mixed precision (FP16), and optimized with AdamW, using the following hyperparameters:
| Parameters | Batch Size | Learning Rate | Warm-up | Weight Decay | Betas |
|------------|------------|---------------|---------|--------------|-------------|
| 3B | 256 | 2e-5 | 50 | 0.01 | (0.9, 0.99) |
| 7B | 128 | 2e-5 | 100 | 0.01 | (0.9, 0.99) |
## Use and Limitations
### Intended Use
These models are intended to be used by the open-source community in chat-like applications, in adherence with the [CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the helpful hand of Dakota Mahan ([@dmayhem93](https://huggingface.co/dmayhem93)).
## Citations
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
| [
-0.03724132478237152,
0.006628660950809717,
0.0053359284065663815,
0.055032871663570404,
0.05939271301031113,
-0.0002712240384425968,
-0.012152344919741154,
-0.01881713978946209,
-0.016204355284571648,
0.06347256898880005,
0.0346374437212944,
-0.01566160097718239,
0.019499054178595543,
0.0... |
AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2985
- Accuracy: 0.9032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3732 | 0.25 | 150 | 1.0708 | 0.6143 |
| 0.9248 | 1.25 | 300 | 0.6273 | 0.7429 |
| 0.3947 | 2.25 | 450 | 0.3035 | 0.8286 |
| 0.4861 | 3.25 | 600 | 0.3107 | 0.8714 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.0544198639690876,
-0.01181795820593834,
0.015061660669744015,
0.02254568040370941,
0.03841909021139145,
0.01997472532093525,
-0.008466715924441814,
-0.019731080159544945,
-0.03348751366138458,
0.04280225560069084,
0.020605148747563362,
-0.01820170320570469,
0.008842026814818382,
0.03535... |
Azaghast/GPT2-SCP-Descriptions | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ## Checking Last Night's Football Results Easily Through an Online Betting Site
You can check [last night's football results](https://tarangball.net/) easily by visiting the website of the online betting site you signed up with. Some sites show the previous night's results for free, while others charge a small fee, so check what your site offers before viewing. The results are also available through free mobile apps for both iOS and Android, which make it quick and convenient to follow the results of the teams you are interested in. However, you should verify that any such app is trustworthy before using it.
## Checking Last Night's Results Also Lets You Follow News About Your Favorite Team
Beyond last night's results, you can follow news about your favorite team through the online betting site. Some sites provide match analysis and playing tips that help you understand and analyze the matches better. Other football competitions can also be watched through these sites, including each country's major league, such as the English Premier League, Spain's La Liga, and Italy's Serie A, as well as top-level competitions such as the UEFA Champions League, which are popular worldwide.
### Sites That Show Results Also Let You Bet on Any League You Want
Besides following last night's results and football news, online betting sites offer other services such as online football betting, a popular activity in online gambling. You can bet on the league you want and choose the options that suit your own judgment. These sites also run prizes and promotions that can increase your chances of winning, along with other services such as online casino games, betting on other sports, and other games, all through the same site. Signing up is simple: fill in your personal details and deposit money into your account through the website.
When checking last night's results through an online betting site, keep the safety of the service in mind. Choose a site that is trustworthy and certified by reliable organizations, for example licensed for online gambling and compliant with government regulations, which helps build confidence in using that site's services. | [
0.0033471831120550632,
-0.019464103505015373,
-0.005062560085207224,
0.012842983938753605,
0.023172713816165924,
0.013067588210105896,
-0.009168910793960094,
0.01379384845495224,
-0.04191123694181442,
0.05603771656751633,
0.02360149845480919,
-0.02482033707201481,
0.040319349616765976,
0.0... |
Azizun/Geotrend-10-epochs | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-04-19T04:52:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: covid-fakenews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-fakenews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0409
- Accuracy: 0.9905
- F1: 0.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.04481479153037071,
-0.009332441724836826,
-0.020536992698907852,
0.03630628436803818,
0.04945153743028641,
0.026026984676718712,
-0.02583927847445011,
-0.01750601828098297,
-0.025892846286296844,
0.06014983728528023,
0.04203319177031517,
0.005158915650099516,
0.01434745267033577,
0.0171... |
Azura/data | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - sayakpaul/xformer-custom-diffusion
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.


For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
| [
-0.03065774217247963,
-0.020762000232934952,
-0.014741543680429459,
0.026021528989076614,
0.025681082159280777,
0.01958303153514862,
-0.0029344072099775076,
-0.008720692247152328,
0.004913166165351868,
0.045939281582832336,
0.012254148721694946,
-0.0021318914368748665,
0.0060884831473231316,... |
Azuris/DialoGPT-medium-envy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased_finetuned_olid_a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_finetuned_olid_a
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3681
- Accuracy: 0.8512
- F1-macro: 0.8034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.4827 | 1.0 | 207 | 0.3716 | 0.8570 | 0.8113 |
| 0.39 | 2.0 | 414 | 0.3681 | 0.8512 | 0.8034 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
-0.022434577345848083,
0.0019633921328932047,
-0.021507620811462402,
0.03476366773247719,
0.04965502768754959,
0.020081637427210808,
-0.021518072113394737,
-0.018543921411037445,
-0.04870827868580818,
0.06680295616388321,
0.034978706389665604,
-0.02516549453139305,
0.012509222142398357,
0.... |
Azuris/DialoGPT-small-envy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2023-04-19T05:03:50Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| [
-0.030479364097118378,
0.016677316278219223,
0.003655519802123308,
0.010180559940636158,
0.04427102580666542,
-0.017677977681159973,
-0.022366870194673538,
-0.01704283617436886,
-0.029496440663933754,
0.08382560312747955,
0.01591019332408905,
-0.00796736404299736,
0.012399387545883656,
0.0... |
BSC-LT/gpt2-large-bne | [
"pytorch",
"gpt2",
"text-generation",
"es",
"dataset:bne",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"license:apache-2.0"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2023-04-19T05:27:34Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: azuki
---
### azuki Dreambooth model trained by Grigsss with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
azuki (use that on your prompt)

| [
-0.028544623404741287,
-0.03769807890057564,
-0.016090506687760353,
0.030004912987351418,
0.02853906899690628,
0.016923844814300537,
-0.03166300803422928,
-0.023753562942147255,
-0.02235466241836548,
0.05433996766805649,
0.030050747096538544,
0.018840476870536804,
-0.021834781393408775,
0.... |