| modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars) |
|---|---|---|---|---|---|---|
BigBoy/model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T13:44:12Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tmvar_0.0001_0404_ES6_strict_tok1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmvar_0.0001_0404_ES6_strict_tok1
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1472
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9561
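The zero precision, recall, and F1 alongside high accuracy is the classic signature of a tagger that only predicts the majority "O" class on imbalanced NER data. A minimal illustration (the tag sequences below are invented, not taken from this model's evaluation set):

```python
def token_accuracy(gold, pred):
    """Fraction of tokens whose predicted tag matches the gold tag."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def entity_prf(gold, pred, entity="MUT"):
    """Precision/recall/F1 for a single entity tag, computed token-wise."""
    tp = sum(g == p == entity for g, p in zip(gold, pred))
    fp = sum(p == entity and g != entity for g, p in zip(gold, pred))
    fn = sum(g == entity and p != entity for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["O"] * 95 + ["MUT"] * 5   # ~5% entity tokens, as in many biomedical corpora
pred = ["O"] * 100                # degenerate model: always predict "O"
print(token_accuracy(gold, pred))  # 0.95 -- high accuracy
print(entity_prf(gold, pred))      # (0.0, 0.0, 0.0) -- no entities found
```

Entity-level scorers such as seqeval (typically used by the Trainer for these cards) count spans rather than tokens, but the imbalance effect is the same.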
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
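With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from `learning_rate` to zero over `training_steps`. A minimal sketch of that schedule (an approximation of the `transformers` linear schedule, not the library code itself):

```python
def linear_lr(step, base_lr=1e-4, total_steps=2000, warmup_steps=0):
    """Linear decay to zero after an optional warmup (sketch of a
    linear-with-warmup schedule, not the transformers implementation)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))      # base learning rate at the start
print(linear_lr(1000))   # ~5e-05, half the base rate at the midpoint
print(linear_lr(2000))   # 0.0 at the end of training
```

By step 1000 the effective learning rate has already halved, which is worth keeping in mind when reading the training-loss table below.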
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.3183 | 0.49 | 25 | 0.2344 | 0.0 | 0.0 | 0.0 | 0.9555 |
| 0.232 | 0.98 | 50 | 0.2467 | 0.0 | 0.0 | 0.0 | 0.9555 |
| 0.2357 | 1.47 | 75 | 0.2341 | 0.0 | 0.0 | 0.0 | 0.9555 |
| 0.2245 | 1.96 | 100 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.9555 |
| 0.1778 | 2.45 | 125 | 0.1339 | 0.0 | 0.0 | 0.0 | 0.9555 |
| 0.137 | 2.94 | 150 | 0.1222 | 0.0 | 0.0 | 0.0 | 0.9582 |
| 0.1146 | 3.43 | 175 | 0.1339 | 0.0 | 0.0 | 0.0 | 0.9625 |
| 0.1215 | 3.92 | 200 | 0.1472 | 0.0 | 0.0 | 0.0 | 0.9561 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigSalmon/Flowberta
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| 2023-04-06T13:49:39Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.30 +/- 21.42
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
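The `mean_reward` in the metadata above is the mean and standard deviation of the total return over a set of evaluation episodes. A minimal sketch of that summary (the episode returns below are hypothetical, not this agent's actual evaluation runs):

```python
import statistics

def summarize_returns(returns):
    """Mean and population standard deviation of episode returns
    (population std matches numpy's default np.std)."""
    return statistics.mean(returns), statistics.pstdev(returns)

episode_returns = [10.0, 50.0, 30.0, 18.0, 42.0]  # hypothetical evaluation episodes
mean, std = summarize_returns(episode_returns)
print(f"{mean:.2f} +/- {std:.2f}")  # 30.00 +/- 14.75
```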
|
BigSalmon/FormalRobertaa
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
|
BigSalmon/GPTHeHe
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| null |
---
license: mit
language:
- cs
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned multilingual BART model for Czech Grammatical Error Correction.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Satoru Katsumata
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** Czech
- **License:** MIT License
- **Finetuned from model [optional]:** Fairseq multilingual BART-large ([mbart.CC25](https://github.com/Katsumata420/generic-pretrained-GEC/tree/master/mBART-GEC/examples/mbart#pre-trained-models))
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Katsumata420/generic-pretrained-GEC
- **Paper [optional]:** [Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model.](https://aclanthology.org/2020.aacl-main.83/)
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Since this model was trained with fairseq, fairseq must also be used for inference.
More details can be found in the [README](https://github.com/Katsumata420/generic-pretrained-GEC/blob/master/mBART-GEC/README.md).
This fine-tuned model must be used together with its fairseq binary checkpoint, which can be downloaded [here](https://drive.google.com/drive/folders/1oECT9q06j9r0whKmp8cqgpzvXINFutoX?usp=share_link).
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
See this [README](https://github.com/Katsumata420/generic-pretrained-GEC/blob/master/mBART-GEC/HOW_TO_REPRODUCE.md).
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- [m2scorer](https://www.comp.nus.edu.sg/~nlp/conll14st.html)
- Metrics:
  - Precision
  - Recall
  - F0.5
### Results
This model achieves the following results on the AKCES-GEC test set.
- Precision: 75.75
- Recall: 61.41
- F0.5: 72.37
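F0.5 weights precision more heavily than recall (β = 0.5). As a quick check, the reported value follows directly from the precision and recall above:

```python
def f_beta(precision, recall, beta=0.5):
    """F-beta score; beta < 1 favours precision over recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(75.75, 61.41), 2))  # 72.37, matching the reported F0.5
```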
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bib
@inproceedings{katsumata2020AACL,
title = {Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model},
author = {Satoru Katsumata and Mamoru Komachi},
  booktitle = {Proceedings of AACL-IJCNLP 2020},
year = {2020},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
Satoru Katsumata
## Model Card Contact
[More Information Needed]
|
BigSalmon/GPTNeo350MInformalToFormalLincoln
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlnet-large-cased-ner-food-recipe-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-large-cased-ner-food-recipe-v2
This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1478
- Precision: 0.8033
- Recall: 0.8867
- F1: 0.8429
- Accuracy: 0.9708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.5 | 400 | 0.1619 | 0.6591 | 0.8147 | 0.7287 | 0.9507 |
| 0.4091 | 1.01 | 800 | 0.1488 | 0.7832 | 0.8762 | 0.8271 | 0.9689 |
| 0.1678 | 1.51 | 1200 | 0.1538 | 0.8116 | 0.8862 | 0.8473 | 0.9712 |
| 0.1452 | 2.01 | 1600 | 0.1374 | 0.7638 | 0.8653 | 0.8114 | 0.9652 |
| 0.1359 | 2.51 | 2000 | 0.1450 | 0.7837 | 0.8858 | 0.8316 | 0.9678 |
| 0.1359 | 3.02 | 2400 | 0.1403 | 0.778 | 0.8853 | 0.8282 | 0.9676 |
| 0.1143 | 3.52 | 2800 | 0.1515 | 0.8128 | 0.8812 | 0.8456 | 0.9721 |
| 0.1189 | 4.02 | 3200 | 0.1420 | 0.8069 | 0.8862 | 0.8447 | 0.9711 |
| 0.1165 | 4.52 | 3600 | 0.1460 | 0.7861 | 0.8848 | 0.8325 | 0.9687 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigSalmon/GPTNeo350MInformalToFormalLincoln4
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.71 +/- 12.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# repo_id and filename below are placeholders; point them at this model's repo.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
BigSalmon/GPTNeo350MInformalToFormalLincoln6
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| null |
---
language:
- zh
tags:
- glm
- chatgpt
---
Link to github: [here](https://github.com/sunzeyeah/RLHF)
---
This repository is forked from [THUDM/glm-large-chinese](https://huggingface.co/THUDM/glm-large-chinese), which contains a PyTorch implementation of the GLM model with 350 million parameters; the pretrained weights are stored in FP32 precision.
The original code has been slightly adjusted to support the ChatGPT training pipeline in this GitHub repo: [sunzeyeah/RLHF](https://github.com/sunzeyeah/RLHF).
---
# Model description
GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
Please refer to our paper for a detailed description of GLM:
[GLM: General Language Model Pretraining with Autoregressive Blank Infilling](https://arxiv.org/abs/2103.10360) (ACL 2022)
Zhengxiao Du*, Yujie Qian*, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang (*: equal contribution)
Find more examples in our [Github repo](https://github.com/THUDM/GLM).
The base `glm-large-chinese` model is pretrained on the [WuDaoCorpora](https://www.sciencedirect.com/science/article/pii/S2666651021000152) dataset, with autoregressive blank-filling objectives designed for natural language understanding, seq2seq, and language modeling.
---
# Usage (Text Generation)
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sunzeyeah/glm-350M-chinese", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("sunzeyeah/glm-350M-chinese", trust_remote_code=True)
model = model.half().cuda()
max_length = 512
prompt = "我不能确定对方是不是喜欢我,我却想分分秒秒跟他在一起,有谁能告诉我如何能想他少一点"
prefix = "回答:"
encoded_prompt = tokenizer(prompt, prefix + tokenizer.mask_token)
prompt_length = len(encoded_prompt['input_ids'])
encoded_dict = tokenizer(prompt, prefix + tokenizer.mask_token,
max_length=min(prompt_length, max_length),
truncation="only_first",
return_tensors="pt",
return_token_type_ids=False)
max_gen_length = max_length - encoded_dict['input_ids'].shape[1]
inputs = tokenizer.build_inputs_for_generation(encoded_dict, max_gen_length=max_gen_length, padding=True)
inputs = inputs.cuda()
# Note: with do_sample=False decoding is greedy, so top_p and temperature below have no effect.
outputs = model.generate(**inputs,
max_new_tokens=max_gen_length,
eos_token_id=tokenizer.eop_token_id,
pad_token_id=tokenizer.pad_token_id,
do_sample=False,
num_return_sequences=1,
top_p=0.8,
temperature=1.0)
results = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(results)
```
|
BigSalmon/GoodMaskResults
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| 2023-04-06T14:12:17Z
|
---
language:
- zh
tags:
- pangu
- chatgpt
---
Link to github: [here](https://github.com/sunzeyeah/RLHF)
---
# Model Description
Pangu-α was proposed by a joint technical team headed by PCNL and first released in [this repository](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha). It is the first large-scale Chinese pretrained language model, with 200 billion parameters, trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process was carried out on the "Peng Cheng Cloud Brain II" computing platform with the MindSpore deep learning framework. The PengCheng·PanGu-α pretrained model supports a rich set of applications, has strong few-shot learning capabilities, and performs well on text-generation tasks such as question answering, knowledge retrieval, knowledge reasoning, and reading comprehension.
This repository contains a PyTorch implementation of the PanGu model with 2.6 billion parameters; the pretrained weights are stored in FP32 precision.
It is slightly different from the [original pangu implementation](https://huggingface.co/imone/pangu_2_6B) to support the ChatGPT training pipeline in this github repo: [sunzeyeah/RLHF](https://github.com/sunzeyeah/RLHF).
---
|
BigSalmon/InformalToFormalLincoln14
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-06T14:14:20Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_0.0001_0404_ES6_strict_tok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Variome_0.0001_0404_ES6_strict_tok
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1843
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.4144 | 0.13 | 25 | 0.1849 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1834 | 0.26 | 50 | 0.1818 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1924 | 0.39 | 75 | 0.1828 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1806 | 0.52 | 100 | 0.1817 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1699 | 0.65 | 125 | 0.1863 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1783 | 0.79 | 150 | 0.1812 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1747 | 0.92 | 175 | 0.1816 | 0.0 | 0.0 | 0.0 | 0.9759 |
| 0.1583 | 1.05 | 200 | 0.1843 | 0.0 | 0.0 | 0.0 | 0.9759 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigSalmon/InformalToFormalLincoln22
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| 2023-04-06T14:18:54Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2251 | 1.0 | 5533 | 1.1679 |
| 0.9612 | 2.0 | 11066 | 1.1375 |
| 0.7582 | 3.0 | 16599 | 1.1607 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigSalmon/InformalToFormalLincoln24
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-06T14:21:44Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 515.00 +/- 147.46
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mohsin-x-zafar -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mohsin-x-zafar -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mohsin-x-zafar
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
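Given `exploration_fraction=0.1`, `exploration_final_eps=0.01`, and 10M timesteps, the exploration rate ε decays linearly over the first 1M steps and then stays constant. A sketch of this schedule (assuming the SB3 default initial ε of 1.0; this approximates, rather than reproduces, stable-baselines3's internal linear schedule):

```python
def epsilon(step, n_timesteps=10_000_000, fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    """Linear exploration schedule as configured above."""
    end_step = fraction * n_timesteps
    if step >= end_step:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * step / end_step

print(epsilon(0))          # 1.0 at the start
print(epsilon(500_000))    # ~0.505 halfway through the decay
print(epsilon(2_000_000))  # 0.01 for the rest of training
```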
|
BigSalmon/MrLincoln
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| 2023-04-06T14:26:20Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: alkiskoudounas/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/MrLincoln12
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -149.14 +/- 89.62
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Senura/PPO'
'batch_size': 512
'minibatch_size': 128}
```
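A few values in this dict are derived rather than free: CleanRL computes the rollout batch from `num_envs` and `num_steps`. A quick sketch of the arithmetic (illustrative only):

```python
# Derived quantities from the hyperparameters above.
num_envs = 4
num_steps = 128
num_minibatches = 4
total_timesteps = 50000

batch_size = num_envs * num_steps               # 512, matches 'batch_size' above
minibatch_size = batch_size // num_minibatches  # 128, matches 'minibatch_size'
num_updates = total_timesteps // batch_size     # 97 policy updates in this run

print(batch_size, minibatch_size, num_updates)
```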
|
BigSalmon/MrLincoln125MNeo
|
[
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| 2023-04-06T14:34:50Z
|
---
language: id
tags:
- indobert
- indobenchmark
---
## How to use
### Load model and tokenizer
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-pos")
# AutoModelForTokenClassification is needed: the prediction code below relies on
# model.num_labels and model.config.id2label from the token-classification head.
model = AutoModelForTokenClassification.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-pos")
```
### Extract POS Tag
```python
import torch
def predict(model, tokenizer, sentence):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inputs = tokenizer(sentence.split(),
is_split_into_words = True,
return_offsets_mapping=True,
return_tensors="pt")
model.to(device)
# move to gpu
ids = inputs["input_ids"].to(device)
mask = inputs["attention_mask"].to(device)
# forward pass
outputs = model(ids, attention_mask=mask)
logits = outputs[0]
active_logits = logits.view(-1, model.num_labels) # shape (batch_size * seq_len, num_labels)
flattened_predictions = torch.argmax(active_logits, axis=1) # shape (batch_size*seq_len,) - predictions at the token level
tokens = tokenizer.convert_ids_to_tokens(ids.squeeze().tolist())
token_predictions = [model.config.id2label[i] for i in flattened_predictions.cpu().numpy()]
wp_preds = list(zip(tokens, token_predictions)) # list of tuples. Each tuple = (wordpiece, prediction)
prediction = []
for token_pred, mapping in zip(wp_preds, inputs["offset_mapping"].squeeze().tolist()):
#only predictions on first word pieces are important
if mapping[0] == 0 and mapping[1] != 0:
prediction.append(token_pred[1])
else:
continue
return sentence.split(), prediction
sentence = "BJ Habibie adalah Presiden Indonesia ke-3"
words, labels = predict(model, tokenizer, sentence)
```
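The first-wordpiece filter inside `predict` can be illustrated standalone, without loading the model (the tokens and offsets below are hypothetical):

```python
# Keep a prediction only for the first wordpiece of each word: its offset
# starts at 0 within the word and is non-empty. Special tokens map to (0, 0)
# and continuation pieces start at a nonzero offset, so both are skipped.
wp_preds = [("[CLS]", "O"), ("BJ", "B-PER"), ("Habi", "B-PER"), ("##bie", "I-PER"), ("[SEP]", "O")]
offsets = [(0, 0), (0, 2), (0, 4), (4, 8), (0, 0)]  # hypothetical offset_mapping

prediction = [pred for (tok, pred), (start, end) in zip(wp_preds, offsets)
              if start == 0 and end != 0]
print(prediction)  # ['B-PER', 'B-PER']
```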
|
BigSalmon/MrLincoln14
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T14:36:03Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nlp-sexism-detection
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-sexism-detection
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/MrLincoln3
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17
| 2023-04-06T14:41:08Z
|
|
BigSalmon/MrLincoln4
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 214.50 +/- 99.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BigSalmon/MrLincoln5
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| 2023-04-06T14:42:09Z
|
---
language:
- en
tags:
- openvino
---
# declare-lab/flan-alpaca-large
This is the [declare-lab/flan-alpaca-large](https://huggingface.co/declare-lab/flan-alpaca-large) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForSeq2SeqLM
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/declare-lab-flan-alpaca-large-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForSeq2SeqLM.from_pretrained(model_id)
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
result = pipe("hello world")
print(result)
```
|
BigSalmon/NEO125InformalToFormalLincoln
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
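Since the reported loss is the mean token cross-entropy of a causal LM, the corresponding validation perplexities follow directly (a quick check, not from the card):

```python
import math

# Perplexity = exp(mean cross-entropy loss) for a causal language model.
val_losses = [3.6669, 3.6472, 3.6421]  # validation losses from the table above
perplexities = [math.exp(loss) for loss in val_losses]
print([round(p, 2) for p in perplexities])  # [39.13, 38.37, 38.17]
```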
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigSalmon/PhraseBerta
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: JamesEJarvis/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/Points
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| 2023-04-06T14:59:27Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook; it
# downloads the pickled Q-table from the Hub and unpickles it.
model = load_from_hub(repo_id="jennielees/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/Robertsy
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| 2023-04-06T15:03:03Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-basic
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook; it
# downloads the pickled Q-table from the Hub and unpickles it.
model = load_from_hub(repo_id="jennielees/q-Taxi-v3-basic", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/T5F
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 6
| 2023-04-06T15:11:23Z
|
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
pipeline_tag: text-generation
tags:
- llama
- llm
---
## LLaMA-Instruct-Learning
> Instruction tuning for LLaMA
https://github.com/yanqiangmiffy/LLaMA-Instruct-Learning
## Model Weights
| Model Name | Base Model | Size | Model Link |
|-----|----------|------|----|
| LLama-7B-Alpaca | LLaMA-7B | 25GB | https://huggingface.co/quincyqiang/llama-7b-alpaca |
|
BigTooth/Megumin-v0.2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: MultiCorp_all_label_5e-05_0404_ES2_strict_tok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiCorp_all_label_5e-05_0404_ES2_strict_tok
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1346
- Precision: 0.3090
- Recall: 0.1750
- F1: 0.2235
- Accuracy: 0.9674
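As a quick sanity check (not part of the card), the reported F1 is the harmonic mean of the precision and recall above:

```python
# F1 = 2PR / (P + R), computed from the reported evaluation metrics.
precision, recall = 0.3090, 0.1750
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.2235, matching the reported F1
```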
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.9281 | 0.08 | 25 | 0.2509 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2583 | 0.15 | 50 | 0.2399 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2319 | 0.23 | 75 | 0.2011 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.1901 | 0.31 | 100 | 0.1717 | 0.3333 | 0.0014 | 0.0028 | 0.9639 |
| 0.1894 | 0.39 | 125 | 0.1740 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.1492 | 0.46 | 150 | 0.1454 | 0.1955 | 0.0320 | 0.0550 | 0.9635 |
| 0.1504 | 0.54 | 175 | 0.1437 | 0.1288 | 0.0139 | 0.0251 | 0.9643 |
| 0.1559 | 0.62 | 200 | 0.1326 | 0.1795 | 0.1123 | 0.1382 | 0.9665 |
| 0.1571 | 0.7 | 225 | 0.1406 | 0.3095 | 0.0604 | 0.1010 | 0.9613 |
| 0.1353 | 0.77 | 250 | 0.1346 | 0.3090 | 0.1750 | 0.2235 | 0.9674 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BigeS/DialoGPT-small-Rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| 2023-04-06T15:22:49Z
|
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 197.90 +/- 63.33
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Yureeh/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
BillelBenoudjit/jplu-wikiann
|
[
"fr",
"dataset:wikiann",
"model-index"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9234 |
| 1.992 | 2.0 | 4812 | 1.8828 |
| 1.9603 | 3.0 | 7218 | 1.8223 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Bilz/DialoGPT-small-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T15:23:47Z
|
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 210.74 +/- 101.24
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1984
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 10000000
'learning_rate': 0.0003
'num_envs': 4
'num_steps': 1024
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 10
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ankandrew/cleanrl-ppo-LunarLander-v2'
'batch_size': 4096
'minibatch_size': 1024}
```
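For reference, the two derived sizes at the bottom of the list follow CleanRL's convention (an assumption on our part): the rollout batch is `num_envs * num_steps`, which is then split into `num_minibatches` minibatches.

```python
# Sketch of how CleanRL derives the two sizes from the settings above (assumed convention).
num_envs, num_steps, num_minibatches = 4, 1024, 4

batch_size = num_envs * num_steps               # 4 * 1024 = 4096, matching 'batch_size' above
minibatch_size = batch_size // num_minibatches  # 4096 // 4 = 1024, matching 'minibatch_size' above
print(batch_size, minibatch_size)  # → 4096 1024
```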
|
Bimal/my_bot_model
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: MultiCorp_all_label_0.0001_0404_ES2_strict_tok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiCorp_all_label_0.0001_0404_ES2_strict_tok
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.579 | 0.08 | 25 | 0.2462 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2552 | 0.15 | 50 | 0.2390 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2387 | 0.23 | 75 | 0.2371 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2319 | 0.31 | 100 | 0.2171 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2494 | 0.39 | 125 | 0.2292 | 0.0 | 0.0 | 0.0 | 0.9637 |
| 0.2018 | 0.46 | 150 | 0.2174 | 0.0 | 0.0 | 0.0 | 0.9637 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BinksSachary/DialoGPT-small-shaxx
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| 2023-04-06T15:35:08Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: MultiCorp_norm_label_2e-05_0404_ES2_strict_tok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiCorp_norm_label_2e-05_0404_ES2_strict_tok
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0410
- Precision: 0.5775
- Recall: 0.6445
- F1: 0.6091
- Accuracy: 0.9845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4669 | 0.08 | 25 | 0.1415 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1519 | 0.15 | 50 | 0.1264 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1363 | 0.23 | 75 | 0.1108 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1097 | 0.31 | 100 | 0.0915 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1037 | 0.39 | 125 | 0.0883 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0746 | 0.46 | 150 | 0.0736 | 0.0 | 0.0 | 0.0 | 0.9750 |
| 0.0846 | 0.54 | 175 | 0.0683 | 0.0 | 0.0 | 0.0 | 0.9742 |
| 0.0764 | 0.62 | 200 | 0.0671 | 0.0 | 0.0 | 0.0 | 0.9751 |
| 0.0767 | 0.7 | 225 | 0.0659 | 0.64 | 0.0479 | 0.0891 | 0.9778 |
| 0.0689 | 0.77 | 250 | 0.0746 | 0.5244 | 0.1527 | 0.2365 | 0.9703 |
| 0.0718 | 0.85 | 275 | 0.0618 | 0.5739 | 0.1220 | 0.2012 | 0.9760 |
| 0.0696 | 0.93 | 300 | 0.0511 | 0.6404 | 0.2799 | 0.3896 | 0.9808 |
| 0.0633 | 1.01 | 325 | 0.0498 | 0.6383 | 0.4371 | 0.5189 | 0.9812 |
| 0.0358 | 1.08 | 350 | 0.0482 | 0.5319 | 0.5052 | 0.5182 | 0.9825 |
| 0.0575 | 1.16 | 375 | 0.0430 | 0.6702 | 0.4775 | 0.5577 | 0.9838 |
| 0.0432 | 1.24 | 400 | 0.0439 | 0.6302 | 0.5524 | 0.5888 | 0.9828 |
| 0.0415 | 1.32 | 425 | 0.0426 | 0.6299 | 0.5681 | 0.5974 | 0.9833 |
| 0.0454 | 1.39 | 450 | 0.0404 | 0.6263 | 0.5269 | 0.5724 | 0.9847 |
| 0.0421 | 1.47 | 475 | 0.0416 | 0.5990 | 0.6587 | 0.6275 | 0.9836 |
| 0.0487 | 1.55 | 500 | 0.0410 | 0.5775 | 0.6445 | 0.6091 | 0.9845 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BinksSachary/ShaxxBot2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| 2023-04-06T15:37:00Z
|
---
tags:
- alpaca
- instruction
- pythia
---
All IPythia models were trained for 3 epochs on an internal, high-quality GerbilLab instruction dataset of ~75k instructions. Prompt format:
```
Instruction: [instruction goes here]
Input: [input goes here]
Output: [output will be generated here]
or
Instruction: [instruction goes here]
Output: [output will be generated here]
```
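The layout above can be assembled programmatically; a minimal sketch (the `build_prompt` helper name is ours, not part of the release):

```python
# Hedged sketch: assemble a prompt in the IPythia instruction format described above.
def build_prompt(instruction, input_text=None):
    """Build the prompt string; the Input line is omitted when no input is given."""
    lines = ["Instruction: " + instruction]
    if input_text:
        lines.append("Input: " + input_text)
    lines.append("Output:")  # the model generates its answer after this line
    return "\n".join(lines)

print(build_prompt("Translate to French.", "Hello"))
```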
|
Blazeolmo/Scrabunzi
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 468.98 +/- 105.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Blerrrry/Kkk
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T15:46:20Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: MultiCorp_norm_label_5e-05_0404_ES2_strict_tok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiCorp_norm_label_5e-05_0404_ES2_strict_tok
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.2471 | 0.08 | 25 | 0.1373 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1494 | 0.15 | 50 | 0.1301 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1363 | 0.23 | 75 | 0.1163 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.113 | 0.31 | 100 | 0.0953 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1122 | 0.39 | 125 | 0.0958 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0901 | 0.46 | 150 | 0.0851 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0935 | 0.54 | 175 | 0.0772 | 0.0 | 0.0 | 0.0 | 0.9755 |
| 0.0933 | 0.62 | 200 | 0.0738 | 0.0 | 0.0 | 0.0 | 0.9770 |
| 0.0849 | 0.7 | 225 | 0.0871 | 0.0 | 0.0 | 0.0 | 0.9708 |
| 0.0818 | 0.77 | 250 | 0.0866 | 0.0 | 0.0 | 0.0 | 0.9717 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BlightZz/DialoGPT-medium-Kurisu
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19
| null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DialoGPT-large-finetuned-mc-uk-parsed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DialoGPT-large-finetuned-mc-uk-parsed
This model is a fine-tuned version of [microsoft/DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.3332 | 1.0 | 391306 | 2.3034 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.11.0+cu113
- Datasets 2.11.0
- Tokenizers 0.13.2
|
BlueGamerBeast/DialoGPT-small-joshua
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T15:52:16Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: MultiCorp_norm_label_0.0001_0404_ES2_strict_tok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiCorp_norm_label_0.0001_0404_ES2_strict_tok
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.4116
- Recall: 0.5509
- F1: 0.4712
- Accuracy: 0.9724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2418 | 0.08 | 25 | 0.1319 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.1283 | 0.15 | 50 | 0.1029 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0978 | 0.23 | 75 | 0.0817 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0861 | 0.31 | 100 | 0.0793 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0887 | 0.39 | 125 | 0.0727 | 0.0 | 0.0 | 0.0 | 0.9734 |
| 0.0621 | 0.46 | 150 | 0.0624 | 0.3772 | 0.0322 | 0.0593 | 0.9752 |
| 0.0657 | 0.54 | 175 | 0.0578 | 0.7010 | 0.0509 | 0.0949 | 0.9771 |
| 0.0741 | 0.62 | 200 | 0.0521 | 0.4662 | 0.1804 | 0.2601 | 0.9795 |
| 0.0609 | 0.7 | 225 | 0.0559 | 0.5162 | 0.5487 | 0.5319 | 0.9755 |
| 0.056 | 0.77 | 250 | 0.0591 | 0.4116 | 0.5509 | 0.4712 | 0.9724 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BobBraico/bert-finetuned-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T15:54:30Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.05 +/- 13.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and checkpoint filename below are placeholders; substitute your own):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
BobBraico/distilbert-base-uncased-finetuned-imdb-accelerate
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-en2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7646
- Wer: 200.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-----:|
| 0.0 | 1000.0 | 1000 | 1.9557 | 200.0 |
| 0.0 | 2000.0 | 2000 | 1.7736 | 200.0 |
| 0.0 | 3000.0 | 3000 | 1.7677 | 200.0 |
| 0.0 | 4000.0 | 4000 | 1.7646 | 200.0 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.11.0
|
BobBraico/distilbert-base-uncased-finetuned-imdb
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T15:57:03Z
|
---
language: is
datasets:
- language-and-voice-lab/samromur_asr
- language-and-voice-lab/samromur_children
- language-and-voice-lab/malromur_asr
- language-and-voice-lab/althingi_asr
tags:
- audio
- automatic-speech-recognition
- icelandic
- whisper
- whisper-large
- iceland
- reykjavik
- samromur
license: cc-by-4.0
widget:
model-index:
- name: whisper-large-icelandic-30k-steps-1000h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur (Test)
type: language-and-voice-lab/samromur_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 8.479
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur (Dev)
type: language-and-voice-lab/samromur_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 7.299
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur Children (Test)
type: language-and-voice-lab/samromur_children
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 7.743
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur Children (Dev)
type: language-and-voice-lab/samromur_children
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 4.591
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malrómur (Test)
type: language-and-voice-lab/malromur_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 5.110
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malrómur (Dev)
type: language-and-voice-lab/malromur_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 5.286
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Althingi (Test)
type: language-and-voice-lab/althingi_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 8.250
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Althingi (Dev)
type: language-and-voice-lab/althingi_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 7.998
---
# whisper-large-icelandic-30k-steps-1000h
The "whisper-large-icelandic-30k-steps-1000h" is an acoustic model suitable for Automatic Speech Recognition in Icelandic. It is the result of fine-tuning the model "openai/whisper-large" for 30,000 steps with around 1000 hours of Icelandic data developed by the [Language and Voice Laboratory](https://huggingface.co/language-and-voice-lab). Most of the data is available at public repositories such as [LDC](https://www.ldc.upenn.edu/), [OpenSLR](https://openslr.org/) or [Clarin.is](https://clarin.is/)
The specific list of corpora used to fine-tune the model is:
- [Samrómur 21.05 (114h34m)](http://www.openslr.org/112/)
- [Samrómur Children (127h25m)](https://catalog.ldc.upenn.edu/LDC2022S11)
- [Malrómur (119h03m)](https://clarin.is/en/resources/malromur/)
- [Althingi Parliamentary Speech (514h29m)](https://catalog.ldc.upenn.edu/LDC2021S01)
- L2-Speakers Data (125h55m) **Unpublished material**
The fine-tuning process was performed during April 2023 on the servers of the Language and Voice Laboratory (https://lvl.ru.is/) at Reykjavík University (Iceland) by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena).
# Evaluation
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
#Load the processor and model.
MODEL_NAME="language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h"
processor = WhisperProcessor.from_pretrained(MODEL_NAME)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME).to("cuda")
#Load the dataset
from datasets import load_dataset, Audio
ds=load_dataset("language-and-voice-lab/samromur_children",split='test')
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def map_to_pred(batch):
audio = batch["audio"]
input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
batch["reference"] = processor.tokenizer._normalize(batch['normalized_text'])
with torch.no_grad():
predicted_ids = model.generate(input_features.to("cuda"))[0]
transcription = processor.decode(predicted_ids)
batch["prediction"] = processor.tokenizer._normalize(transcription)
return batch
#Do the evaluation
result = ds.map(map_to_pred)
#Compute the overall WER now.
from evaluate import load
wer = load("wer")
WER=100 * wer.compute(references=result["reference"], predictions=result["prediction"])
print(WER)
```
**Test Result**: 7.743795695602924
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2023whisperlarge30kicelandic,
title={Acoustic Model in Icelandic: whisper-large-icelandic-30k-steps-1000h.},
author={Hernandez Mena, Carlos Daniel},
year={2023},
url={https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h},
}
```
# Acknowledgements
Thanks to Jón Guðnason, head of the Language and Voice Lab for providing computational power to make this model possible.
We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture. This model is an unexpected result of all the resources gathered by the Programme.
Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.
|
BogdanKuloren/continual-learning-paper-embeddings-model
|
[
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"MPNetModel"
],
"model_type": "mpnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
language: de
inference: false
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
- peft
- lora
license: apache-2.0
model-index:
- name: whisper-large-german-lora-cv13 by Florian Zimmermeister @A\\\\Ware
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs
type: google/fleurs
args: de_de
metrics:
- name: Test WER
type: wer
value: 2.9811177839095384
- name: Test CER
type: cer
value: 1.4070675486699245
library_name: transformers
pipeline_tag: automatic-speech-recognition
---
|
BonjinKim/dst_kor_bert
|
[
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null |
{
"architectures": [
"BertForPreTraining"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-06T16:08:32Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1935
- Exact Match: 57.2183
- F1: 71.7072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.3204 | 0.5 | 19 | 3.6469 | 10.9155 | 20.4300 |
| 6.3204 | 0.99 | 38 | 2.7834 | 17.9577 | 28.8829 |
| 3.5802 | 1.5 | 57 | 2.3114 | 24.2958 | 36.4160 |
| 3.5802 | 1.99 | 76 | 2.0209 | 29.4014 | 42.5434 |
| 3.5802 | 2.5 | 95 | 1.7380 | 38.3803 | 51.5950 |
| 2.0482 | 2.99 | 114 | 1.4687 | 44.8944 | 59.1567 |
| 2.0482 | 3.5 | 133 | 1.3680 | 50.0 | 64.4849 |
| 1.3956 | 3.99 | 152 | 1.2840 | 50.5282 | 65.7446 |
| 1.3956 | 4.5 | 171 | 1.2633 | 52.6408 | 67.0356 |
| 1.3956 | 4.99 | 190 | 1.2035 | 53.5211 | 68.4126 |
| 1.0901 | 5.5 | 209 | 1.2142 | 54.5775 | 69.1038 |
| 1.0901 | 5.99 | 228 | 1.1843 | 55.6338 | 69.8223 |
| 1.0901 | 6.5 | 247 | 1.1881 | 56.6901 | 70.7746 |
| 0.9217 | 6.99 | 266 | 1.1898 | 56.1620 | 70.2471 |
| 0.9217 | 7.5 | 285 | 1.1882 | 56.5141 | 70.7193 |
| 0.8307 | 7.99 | 304 | 1.2073 | 56.8662 | 71.6134 |
| 0.8307 | 8.5 | 323 | 1.1930 | 57.0423 | 71.3981 |
| 0.8307 | 8.99 | 342 | 1.1980 | 57.0423 | 71.8225 |
| 0.7811 | 9.5 | 361 | 1.1940 | 57.2183 | 71.7072 |
| 0.7811 | 9.99 | 380 | 1.1935 | 57.2183 | 71.7072 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
BossLee/t5-gec
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 6
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Raiden-1001/poca-Soccerv6
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Botjallu/DialoGPT-small-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T16:23:51Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-Squad-ID-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-Squad-ID-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3876
- Exact Match: 53.6102
- F1: 69.6077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 1.5313 | 0.5 | 463 | 1.4235 | 48.7014 | 66.1658 |
| 1.3868 | 1.0 | 926 | 1.3193 | 51.7189 | 68.5896 |
| 1.2618 | 1.5 | 1389 | 1.2877 | 52.8032 | 69.3561 |
| 1.1847 | 2.0 | 1852 | 1.2893 | 53.0218 | 69.7724 |
| 1.0884 | 2.5 | 2315 | 1.2777 | 53.3328 | 69.8210 |
| 1.0927 | 3.0 | 2778 | 1.2596 | 53.4000 | 69.9664 |
| 0.9519 | 3.5 | 3241 | 1.3342 | 53.6102 | 69.6168 |
| 0.9591 | 4.0 | 3704 | 1.3078 | 54.0640 | 69.9492 |
| 0.8586 | 4.49 | 4167 | 1.3876 | 53.6102 | 69.6077 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Botslity/Bot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
# alpaca-opt-6.7b
Source Code: www.github.com/Manuel030/alpaca-opt
Training and inference code for [OPT](https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/) models based on the [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) training protocol. You can get the trained model
from the Hugging Face Hub:
```python
from transformers import AutoTokenizer, OPTForCausalLM
model = OPTForCausalLM.from_pretrained(
"Manuel030/alpaca-opt-6.7b",
)
tokenizer = AutoTokenizer.from_pretrained("Manuel030/alpaca-opt-6.7b")
```
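Under the Alpaca training protocol, each instruction is wrapped in a fixed prompt template before tokenization and generation. The exact template used by this repository is not reproduced here, so the sketch below assumes the standard Stanford Alpaca format:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) using the standard
    Stanford Alpaca prompt template (assumed, not taken from this repo)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Tell me about alpacas.")
```

The resulting string would then be tokenized and passed to `model.generate(...)` to produce responses like the samples below.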
### Why?
Most base models are based on Llama and hence do not have a permissive license.
### Outputs
```
Instruction: Tell me about alpacas.
Response: Alpacas are a species of large South American camelids. They are characterized by their long, coarse wool and their unique coat patterning. Alpacas are commonly used for their soft, warm wool, which is used in a variety of products, including clothing, blankets, and yarn. Alpacas are also used for their fiber, which is used in the production of rope and twine. Alpacas are also known for their friendly and docile nature. They are popular companions for people who enjoy the outdoors, and are often kept as pets or used as herd animals. Alpacas are also known for their unique personality traits, including their unique vocalizations and their unique way of walking. Alpacas are native to South America and are found in Argentina, Bolivia, Chile, Colombia, Ecuador, Peru, and Uruguay. They are also commonly found in other parts of the world, such as in Australia, New Zealand, and Europe. Alpacas are a popular livestock species, and are often used to help raise llamas and other types of livestock.
```
```
Instruction: Tell me about the president of Mexico in 2019.
Response: The president of Mexico in 2019 is Andres Manuel Lopez Obrador (AMLO), who took office on December 1, 2018. AMLO is a leftist politician and former mayor of Mexico City. He is known for his populist platform and his opposition to the country's political establishment. He has promised to bring about a more equal and just society and to address the problems of corruption and inequality. AMLO has also promised to address the country's problems of crime, violence, and poverty. He has also been critical of the United States' immigration policy and has pledged to protect the rights of migrants and asylum seekers. He has also promised to fight corruption and to reduce the country's dependence on foreign oil.
```
### Credits
- [tloen/alpaca-lora](https://github.com/tloen/alpaca-lora)
|
Branex/gpt-neo-2.7B
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-IDK-MRC-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8673
- Exact Match: 74.0838
- F1: 81.0390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2177 | 0.49 | 36 | 2.3043 | 45.2880 | 46.1924 |
| 3.4831 | 0.98 | 72 | 1.5333 | 51.3089 | 56.5227 |
| 1.6897 | 1.48 | 108 | 1.1604 | 60.2094 | 68.3733 |
| 1.6897 | 1.97 | 144 | 0.9852 | 65.3141 | 72.9935 |
| 1.1108 | 2.46 | 180 | 0.9487 | 65.4450 | 72.8064 |
| 0.8854 | 2.95 | 216 | 0.8634 | 68.0628 | 75.1967 |
| 0.7269 | 3.45 | 252 | 0.9271 | 69.7644 | 76.9429 |
| 0.7269 | 3.94 | 288 | 0.9044 | 69.3717 | 76.4864 |
| 0.648 | 4.44 | 324 | 0.8352 | 73.1675 | 79.8410 |
| 0.5446 | 4.92 | 360 | 0.8074 | 74.7382 | 81.2181 |
| 0.5446 | 5.42 | 396 | 0.8726 | 73.4293 | 80.5400 |
| 0.497 | 5.91 | 432 | 0.8598 | 73.6911 | 80.8239 |
| 0.4647 | 6.41 | 468 | 0.8673 | 74.0838 | 81.0390 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Broadus20/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9166666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 0.5572 | 0.7917 |
| No log | 2.0 | 7 | 0.4659 | 0.75 |
| 0.4694 | 2.86 | 10 | 0.4492 | 0.7917 |
| 0.4694 | 4.0 | 14 | 0.2875 | 0.875 |
| 0.4694 | 4.86 | 17 | 0.2463 | 0.875 |
| 0.3403 | 6.0 | 21 | 0.2235 | 0.9167 |
| 0.3403 | 6.86 | 24 | 0.2371 | 0.9167 |
| 0.3403 | 8.0 | 28 | 0.1865 | 0.9167 |
| 0.2581 | 8.86 | 31 | 0.3179 | 0.8333 |
| 0.2581 | 10.0 | 35 | 0.2050 | 0.8333 |
| 0.2581 | 10.86 | 38 | 0.2885 | 0.8333 |
| 0.192 | 12.0 | 42 | 0.2371 | 0.7917 |
| 0.192 | 12.86 | 45 | 0.1783 | 0.875 |
| 0.192 | 14.0 | 49 | 0.1164 | 0.9167 |
| 0.1479 | 14.86 | 52 | 0.1250 | 0.9167 |
| 0.1479 | 16.0 | 56 | 0.1491 | 0.875 |
| 0.1479 | 16.86 | 59 | 0.1409 | 0.875 |
| 0.1348 | 18.0 | 63 | 0.1192 | 0.9167 |
| 0.1348 | 18.86 | 66 | 0.1168 | 0.9167 |
| 0.1461 | 20.0 | 70 | 0.1106 | 0.9167 |
| 0.1461 | 20.57 | 72 | 0.1101 | 0.9167 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Broadus20/DialoGPT-small-joshua
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| 2023-04-06T16:38:08Z
|
---
license: mit
tags:
- image-classification
- tfjs
---
## TensorFlow.js version of Mobilenet
Pushed from Web

|
Brona/poc_de
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TariqJamil/vit-base-patch16-224-in21k-euroSat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TariqJamil/vit-base-patch16-224-in21k-euroSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9975
- Train Accuracy: 1.0
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.8525
- Validation Accuracy: 1.0
- Validation Top-3-accuracy: 1.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 25, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.4175 | 0.7397 | 0.9315 | 1.1650 | 1.0 | 1.0 | 0 |
| 0.9975 | 1.0 | 1.0 | 0.8525 | 1.0 | 1.0 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Brunomezenga/NN
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
# My Toxicity Debiaser Pipeline
This custom pipeline debiases toxic text using a toxicity classifier and GPT-2.
## Usage
To use this pipeline, you first need to download the required models and tokenizers, and then import the `MyToxicityDebiaserPipeline` class:
```python
!git lfs install
!git clone https://huggingface.co/shainaraza/toxicity_debias_pipeline
%cd toxicity_debias_pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification, GPT2LMHeadModel, GPT2Tokenizer
from my_toxicity_debiaser import MyToxicityDebiaserPipeline
toxicity_model_name = "shainaraza/toxity_classify_debiaser"
gpt_model_name = "gpt2"
toxicity_tokenizer = AutoTokenizer.from_pretrained(toxicity_model_name)
toxicity_model = AutoModelForSequenceClassification.from_pretrained(toxicity_model_name)
gpt_tokenizer = GPT2Tokenizer.from_pretrained(gpt_model_name)
gpt_model = GPT2LMHeadModel.from_pretrained(gpt_model_name)
pipeline = MyToxicityDebiaserPipeline(
model=toxicity_model,
tokenizer=toxicity_tokenizer,
gpt_model=gpt_model,
gpt_tokenizer=gpt_tokenizer,
)
text = "Your example text here"
result = pipeline(text)
print(result)
```
## Tips
Here are some tips for tuning the GPT2 model to improve the quality of its generated prompts:
- `max_length`: controls the maximum length of the generated prompt. Experiment with different values to find what suits your needs; a longer prompt may carry more context but can become less coherent.
- `top_p`: controls the diversity of sampling. A lower value keeps only the most probable tokens and generates more conservative, predictable prompts, while a higher value admits more candidates and yields more diverse, creative output.
- `temperature`: controls the randomness of sampling. A lower value sharpens the distribution toward predictable prompts, while a higher value flattens it toward more diverse and creative ones.
As for the prompt, you can try different prompts to see which one works better for your specific use case. You can also try pre-processing the input text to remove any bias or offensive language before passing it to the GPT2 model. Additionally, you may want to consider fine-tuning the GPT2 model on your specific task to improve its performance.
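To make the effect of `temperature` and `top_p` concrete, here is a minimal pure-Python sketch of how these parameters reshape a next-token probability distribution before sampling. This illustrates the mechanism only, not the GPT-2 internals:

```python
import math
import random

def apply_temperature(logits, temperature=1.0):
    """Divide logits by temperature, then softmax. Lower temperature
    sharpens the distribution (more predictable); higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. Lower top_p -> fewer candidates."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    filtered = [0.0] * len(probs)
    for i in kept:
        filtered[i] = probs[i] / mass
    return filtered

logits = [2.0, 1.0, 0.5, -1.0]          # hypothetical next-token logits
probs = apply_temperature(logits, temperature=0.7)
probs = top_p_filter(probs, top_p=0.9)
token = random.choices(range(len(probs)), weights=probs)[0]
```

In `transformers`, these same knobs are passed directly to `model.generate(...)` as keyword arguments.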
|
Bryan190/Aguy190
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T16:43:51Z
|
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-japanese-sentiment
results: []
language:
- ja
pipeline_tag: text-classification
metrics:
- accuracy
---
# bert-finetuned-japanese-sentiment
This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) on a Japanese Amazon product reviews dataset.
## Model description
Model trained for sentiment classification of Japanese Amazon review sentences.
Sentiment analysis is a common task in natural language processing. It consists of classifying the polarity of a given text at the sentence or document level. For instance, the sentence "The food is good" has a positive sentiment, while the sentence "The food is bad" has a negative sentiment.
In this model, we fine-tuned a BERT model on a Japanese sentiment analysis dataset. The dataset contains 20,000 sentences extracted from Amazon reviews. Each sentence is labeled as positive, neutral, or negative. The model was trained for 5 epochs with a batch size of 16.
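At inference time, classification reduces to a softmax over the model's three output logits followed by an argmax into the label set. A minimal sketch of that step (the positive/neutral/negative label order is an assumption; verify it against the model's `id2label` config):

```python
import math

# Assumed label order -- check the model's id2label mapping before relying on it.
LABELS = ["positive", "neutral", "negative"]

def classify(logits):
    """Softmax over raw logits, then pick the most probable label."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Hypothetical logits from the classification head for one sentence:
label, score = classify([3.1, 0.4, -1.2])
```

With `transformers`, the same result comes from a `text-classification` pipeline, which applies this softmax/argmax internally.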
## Training and evaluation data
- Epochs: 6
- Training Loss: 0.087600
- Validation Loss: 1.028876
- Accuracy: 0.813202
- Precision: 0.712440
- Recall: 0.756031
- F1: 0.728455
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.2
|
Bryanwong/wangchanberta-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BumBelDumBel/TRUMP
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-06T17:03:08Z
|
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: perioli_vgm_v4.2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
config: discharge
split: test
args: discharge
metrics:
- name: Precision
type: precision
value: 0.8813559322033898
- name: Recall
type: recall
value: 0.859504132231405
- name: F1
type: f1
value: 0.8702928870292886
- name: Accuracy
type: accuracy
value: 0.9955660656222288
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# perioli_vgm_v4.2
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0247
- Precision: 0.8814
- Recall: 0.8595
- F1: 0.8703
- Accuracy: 0.9956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.84 | 100 | 0.1047 | 0.4194 | 0.2149 | 0.2842 | 0.9764 |
| No log | 1.68 | 200 | 0.0555 | 0.5441 | 0.6116 | 0.5759 | 0.9843 |
| No log | 2.52 | 300 | 0.0445 | 0.5899 | 0.6777 | 0.6308 | 0.9879 |
| No log | 3.36 | 400 | 0.0288 | 0.7402 | 0.7769 | 0.7581 | 0.9929 |
| 0.0777 | 4.2 | 500 | 0.0292 | 0.8033 | 0.8099 | 0.8066 | 0.9938 |
| 0.0777 | 5.04 | 600 | 0.0172 | 0.8321 | 0.9008 | 0.8651 | 0.9962 |
| 0.0777 | 5.88 | 700 | 0.0321 | 0.8067 | 0.7934 | 0.8 | 0.9932 |
| 0.0777 | 6.72 | 800 | 0.0165 | 0.8862 | 0.9008 | 0.8934 | 0.9967 |
| 0.0777 | 7.56 | 900 | 0.0318 | 0.8644 | 0.8430 | 0.8536 | 0.9953 |
| 0.0093 | 8.4 | 1000 | 0.0247 | 0.8814 | 0.8595 | 0.8703 | 0.9956 |
| 0.0093 | 9.24 | 1100 | 0.0220 | 0.8678 | 0.8678 | 0.8678 | 0.9962 |
| 0.0093 | 10.08 | 1200 | 0.0183 | 0.8607 | 0.8678 | 0.8642 | 0.9965 |
| 0.0093 | 10.92 | 1300 | 0.0269 | 0.8739 | 0.8595 | 0.8667 | 0.9959 |
| 0.0093 | 11.76 | 1400 | 0.0171 | 0.8387 | 0.8595 | 0.8490 | 0.9965 |
| 0.0035 | 12.61 | 1500 | 0.0201 | 0.8607 | 0.8678 | 0.8642 | 0.9965 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.2.2
- Tokenizers 0.13.3
|
BumBelDumBel/ZORK-AI-TEST
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| 2023-04-06T17:03:12Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.33 +/- 34.14
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CALM/CALM
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: bigscience-openrail-m
---
A girl who works at the Apple computer
In the Lviv city administration
The Prozorro website is on the monitor
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 85
| 2023-04-06T17:30:07Z
|
---
license: mit
tags:
- personal data
- privacy
- legal
- infosec
- security
- vulnerabilities
- compliance
- text generation
model-index:
- name: GPT-PDVS1-None
results: []
language:
- en
pipeline_tag: text-generation
widget:
- text: "Doreen Ball was born in the year"
example_title: "Year of birth"
- text: "Tanya Lyons lives at "
example_title: "Address"
---
# GPT-PDVS1-None
<img style="float:right; margin:10px; margin-right:30px" src="https://huggingface.co/NeuraXenetica/GPT-PDVS1-None/resolve/main/GPT-PDVS_logo_03s.png" width="150" height="150"></img>
**GPT-PDVS1-None** is an experimental open-source text-generating AI designed for testing vulnerabilities in GPT-type models relating to the gathering, retention, and possible later dissemination (whether in accurate or distorted form) of individuals’ personal data.
GPT-PDVS1-None is a member of the larger “GPT Personal Data Vulnerability Simulator” (GPT-PDVS) model family; it has been fine-tuned on a text corpus to which no personal-data sentences were added. Other members of the model family have been fine-tuned on corpora with differing concentrations and varieties of personal data.
## Model description
The model is a fine-tuned version of GPT-2 that has been trained on a text corpus containing 18,000 paragraphs from pages in the English-language version of Wikipedia, randomly selected from the “[Quoref (Q&A for Coreference Resolution)](https://www.kaggle.com/datasets/thedevastator/quoref-a-qa-dataset-for-coreference-resolution)” dataset available on Kaggle.com.
## Intended uses & limitations
This model has been designed for experimental research purposes; it isn’t intended for use in a production setting or in any sensitive or potentially hazardous contexts.
## Training procedure and hyperparameters
The model was fine-tuned using a Tesla T4 with 16GB of GPU memory. The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
- epochs: 8
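The `ExponentialDecay` schedule above decays the learning rate by a factor of 0.95 every 500 steps, and with `staircase: False` the decay is continuous. As a reference, a minimal sketch of the rate at a given step (illustrative, not the Keras object itself):

```python
def exponential_decay(step, initial_lr=0.0005, decay_steps=500, decay_rate=0.95):
    """Continuous exponential decay, matching Keras' ExponentialDecay with staircase=False."""
    return initial_lr * decay_rate ** (step / decay_steps)

print(exponential_decay(0))     # 0.0005
print(exponential_decay(500))   # ≈ 0.000475 (one full decay interval)
```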
### Framework versions
- Transformers 4.27.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16,451
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-synthesized-turkish-8-hour
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-synthesized-turkish-8-hour
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2300
- Wer: 23.0527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.2682 | 0.52 | 100 | 0.5845 | 99.7901 |
| 0.4591 | 1.04 | 200 | 0.3895 | 21.4541 |
| 0.2482 | 1.56 | 300 | 0.2241 | 12.2145 |
| 0.1554 | 2.08 | 400 | 0.2092 | 11.7825 |
| 0.096 | 2.6 | 500 | 0.2035 | 13.9057 |
| 0.0765 | 3.12 | 600 | 0.2052 | 11.2517 |
| 0.0424 | 3.65 | 700 | 0.2024 | 13.4490 |
| 0.0403 | 4.17 | 800 | 0.2094 | 12.0849 |
| 0.0216 | 4.69 | 900 | 0.2049 | 13.1959 |
| 0.0201 | 5.21 | 1000 | 0.2079 | 12.1034 |
| 0.0101 | 5.73 | 1100 | 0.2073 | 12.5663 |
| 0.0131 | 6.25 | 1200 | 0.2093 | 16.7757 |
| 0.0088 | 6.77 | 1300 | 0.2121 | 16.5165 |
| 0.0073 | 7.29 | 1400 | 0.2142 | 15.3314 |
| 0.0036 | 7.81 | 1500 | 0.2183 | 13.7020 |
| 0.0047 | 8.33 | 1600 | 0.2159 | 16.1647 |
| 0.0038 | 8.85 | 1700 | 0.2166 | 13.7514 |
| 0.0027 | 9.38 | 1800 | 0.2172 | 19.9975 |
| 0.0028 | 9.9 | 1900 | 0.2183 | 18.2385 |
| 0.0015 | 10.42 | 2000 | 0.2196 | 17.4238 |
| 0.0023 | 10.94 | 2100 | 0.2192 | 14.7019 |
| 0.0012 | 11.46 | 2200 | 0.2216 | 15.9919 |
| 0.0017 | 11.98 | 2300 | 0.2215 | 19.6334 |
| 0.001 | 12.5 | 2400 | 0.2219 | 20.5160 |
| 0.0014 | 13.02 | 2500 | 0.2236 | 21.7813 |
| 0.0011 | 13.54 | 2600 | 0.2242 | 23.0897 |
| 0.0009 | 14.06 | 2700 | 0.2276 | 25.0401 |
| 0.001 | 14.58 | 2800 | 0.2269 | 18.7014 |
| 0.001 | 15.1 | 2900 | 0.2265 | 20.8554 |
| 0.0008 | 15.62 | 3000 | 0.2272 | 19.7013 |
| 0.0009 | 16.15 | 3100 | 0.2277 | 26.5831 |
| 0.0007 | 16.67 | 3200 | 0.2290 | 24.3427 |
| 0.0008 | 17.19 | 3300 | 0.2285 | 20.7011 |
| 0.0007 | 17.71 | 3400 | 0.2288 | 21.8738 |
| 0.0007 | 18.23 | 3500 | 0.2290 | 20.7258 |
| 0.0006 | 18.75 | 3600 | 0.2295 | 21.1641 |
| 0.0006 | 19.27 | 3700 | 0.2297 | 23.7625 |
| 0.0007 | 19.79 | 3800 | 0.2301 | 24.4044 |
| 0.0006 | 20.31 | 3900 | 0.2299 | 22.9786 |
| 0.0006 | 20.83 | 4000 | 0.2300 | 23.0527 |
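The Wer column above is the word error rate in percent: the word-level edit distance between hypothesis and reference transcripts, normalized by reference length. A minimal sketch of the metric (illustrative only, not the implementation the Trainer used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(r)][len(h)] / len(r)

print(wer("merhaba dünya nasılsın", "merhaba dünya"))  # 1 deletion / 3 words ≈ 0.333
```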
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 71
| null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ria14313/distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ria14313/distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1595
- Validation Loss: 1.9001
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1595 | 1.9001 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"has_space"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19,850
| null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2084.14 +/- 53.50
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the `repo_id` and `filename` below are placeholders for this repository's actual values):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo_id/filename — substitute this repository's values.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 132
| 2023-04-06T18:10:04Z
|
---
metrics: null
---
Meta AI's [LLaMA](https://arxiv.org/abs/2302.13971) quantized to 4-bit using v2 of the [GPTQ](https://arxiv.org/abs/2210.17323v2) algorithm.
GPTQ implementation - https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/49efe0b67db4b40eac2ae963819ebc055da64074
Conversion process:
```sh
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-7b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors ./q4/llama7b-4bit-ts-ao-g128-v2.safetensors
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 855
| null |
---
license: mit
language:
- en
metrics:
- r_squared
library_name: keras
---
# Solar Transformer
Please check our paper [Solar Irradiance Forecasting with Transformer model
](https://www.mdpi.com/2076-3417/12/17/8852) for more details.
## Paper
* Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Advances in neural information processing systems 2017, 30.
* Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J. An image is worth 16x16 words: Transformers for image recognition at scale. 2020, arXiv preprint arXiv:2010.11929.
* Bao, H.; Dong, L.; Wei, F. Beit: Bert pre-training of image transformers. 2021, arXiv preprint arXiv:2106.08254.
* Brahma, B.; Wadhvani, R. Solar irradiance forecasting based on deep learning methodologies and multi-site data. Sym-metry 2020, 12(11), p.1830. Available online: https://www.mdpi.com/2073-8994/12/11/1830
## About
Solar energy is one of the most popular sources of renewable energy today, so it is essential to predict solar power generation and adapt energy needs to these predictions. This paper uses a Transformer deep neural network, whose attention mechanism is typically applied to NLP or vision problems; here it is extended to solar irradiance prediction by combining features according to their spatio-temporal properties. Predictions can be made for arbitrarily long time horizons, since the model always predicts one day ahead: each prediction can be appended to the end of the input along the timestep axis while the oldest timestep is removed. A worst-case mean absolute percentage error of 3.45% was achieved for the 1-day-ahead prediction, better than the directly competing method.
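The attention mechanism mentioned above can be sketched as scaled dot-product attention (NumPy, illustrative only — not the paper's exact spatio-temporal variant):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — the core operation of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the last axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

Q = K = V = np.eye(4)  # toy inputs: 4 timesteps, 4 features
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 4)
```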
## Dataset
[NASA POWER Project](https://power.larc.nasa.gov)
Solar irradiance + Weather (temperature, humidity, pressure, wind speed, wind direction)
----------------------------------
**Frameworks:** TensorFlow, NumPy, Pandas, WanDB, Seaborn, Matplotlib
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21
| null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet- jax-diffusers-event/canny-coyo1m
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.
prompt: car, a detailed high-quality professional image

prompt: A house on the water with a small yacht out front, a detailed high-quality professional image

prompt: man with polo shirt, a detailed high-quality professional image

prompt: sneaker, a detailed high-quality professional image

|
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 229
| null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -182.46 +/- 110.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Addwater/LunarLander-v2-PPO'
'batch_size': 512
'minibatch_size': 128}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 574
| null |
Access to model openjorge/Daniela is restricted and you are not in the authorized list. Visit https://huggingface.co/openjorge/Daniela to ask for access.
|
CL/safe-math-bot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T18:47:30Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: dungtd2403/ppo-SnowballTargetTESTCOLAB
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CLAck/en-vi
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| 2023-04-06T18:50:28Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 32-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10167 with parameters:
```
{'batch_size': 512, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 5083,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 384, 'out_features': 32, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
CLAck/indo-mixed
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15
| 2023-04-06T18:58:22Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.64 +/- 0.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the `repo_id` and `filename` below are placeholders for this repository's actual values):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo_id/filename — substitute this repository's values.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
CLAck/indo-pure
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4
| 2023-04-06T19:01:25Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.83 +/- 0.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the `repo_id` and `filename` below are placeholders for this repository's actual values):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo_id/filename — substitute this repository's values.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
CLTL/gm-ner-xlmrbase
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2
| 2023-04-06T19:16:38Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: dungtd2403/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CLTL/icf-domains
|
[
"pytorch",
"roberta",
"nl",
"transformers",
"license:mit",
"text-classification"
] |
text-classification
|
{
"architectures": [
"RobertaForMultiLabelSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### /-grumpy_SD_Version2-1 Dreambooth model trained by Tinsae with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
CLTL/icf-levels-adm
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33
| null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.47 +/- 4.61
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r gian-cr/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
CLTL/icf-levels-att
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
This is a quantized ggml model made for llama cpp. All rights belongs to (https://github.com/facebookresearch/llama)
|
CLTL/icf-levels-enr
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30
| 2023-04-06T19:27:52Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: bsenst/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CLTL/icf-levels-fac
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.6426
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6012
- Bleu: 5.6426
- Gen Len: 17.5697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8673 | 1.0 | 6355 | 1.6258 | 5.4687 | 17.5807 |
| 1.8234 | 2.0 | 12710 | 1.6012 | 5.6426 | 17.5697 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
CLTL/icf-levels-mbw
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30
| null |
---
license: openrail
metrics:
- accuracy
- character
library_name: keras
pipeline_tag: conversational
tags:
- legal
---
|
CM-CA/Cartman
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
Access to model Martin713/tweed-mini-dress is restricted and you are not in the authorized list. Visit https://huggingface.co/Martin713/tweed-mini-dress to ask for access.
|
CTBC/ATS
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-06T20:00:17Z
|
---
license: creativeml-openrail-m
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
tags:
- art
- code
- finance
- music
- text-generation-inference
- fashion
---
|
Cameron/BERT-jigsaw-severetoxic
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30
| null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.40 +/- 28.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Champion/test_upload_vox2_wavlm_epoch8
|
[
"sidekit",
"audio"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ClothingAI Dreambooth model trained by lenssssw with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
CoffeeAddict93/gpt2-modest-proposal
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### vanitypoc-v2 Dreambooth model trained by freemindcore with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
cometrain/neurotitle-rugpt3-small
|
[
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"license:mit"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20
| null |
I tried making groupsize 16, but that did not end well, so I went with 32g. FYI, I can run this with full context on my A6000.
```
65B (act-order true-sequential groupsize)
wikitext2 3.5319948196411133 (stock 16bit)
wikitext2 3.610668182373047 (32g)
wikitext2 3.650667667388916 (16g)
wikitext2 3.6660284996032715 (128)
ptb-new 7.66942024230957 (stock 16bit)
ptb-new 7.71506929397583 (32g)
ptb-new 7.762592792510986 (128)
ptb-new 7.829207897186279 (16g)
c4-new 5.8114824295043945 (stock 16bit)
c4-new 5.859227657318115 (32g)
c4-new 5.893154144287109 (128)
c4-new 5.929086208343506 (16g)
```
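For reference, each perplexity above is the exponential of the mean per-token negative log-likelihood on the eval set; a minimal sketch with made-up token losses (not values from the runs above):

```python
import math

# Perplexity = exp(mean negative log-likelihood per token).
# The per-token losses below are illustrative placeholders.
nll_per_token = [1.30, 1.25, 1.22, 1.28]
mean_nll = sum(nll_per_token) / len(nll_per_token)
perplexity = math.exp(mean_nll)
```

Lower mean NLL maps directly to lower perplexity, which is why the 16-bit baseline sits below every quantized variant.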
|
Connorvr/BrightBot-small
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
```
30B (act-order true-sequential groupsize)
wikitext2 4.100694179534912 (stock 16bit)
wikitext2 4.179347991943359 (32g)
wikitext2 4.222894191741943 (128g)
ptb-new 8.13940715789795 (stock 16bit)
ptb-new 8.201859474182129 (32g)
ptb-new 8.227158546447754 (128g)
c4-new 6.129664421081543 (stock 16bit)
c4-new 6.190909385681152 (32g)
c4-new 6.230474948883057 (128g)
```
|
Cool/Demo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_11_aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_11_aug
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Mse: 0.2285
- Mae: 0.3670
- R2: 0.4927
- Accuracy: 0.7067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| No log | 1.0 | 263 | 0.2010 | 0.2010 | 0.3575 | 0.5311 | 0.7367 |
| 0.2435 | 2.0 | 526 | 0.1490 | 0.1490 | 0.2495 | 0.6523 | 0.8733 |
| 0.2435 | 3.0 | 789 | 0.0972 | 0.0972 | 0.2068 | 0.7732 | 0.9067 |
| 0.0906 | 4.0 | 1052 | 0.1115 | 0.1115 | 0.2082 | 0.7399 | 0.9067 |
| 0.0906 | 5.0 | 1315 | 0.0904 | 0.0904 | 0.1684 | 0.7890 | 0.9 |
| 0.0421 | 6.0 | 1578 | 0.0791 | 0.0791 | 0.1542 | 0.8153 | 0.93 |
| 0.0421 | 7.0 | 1841 | 0.0843 | 0.0843 | 0.1415 | 0.8034 | 0.9133 |
| 0.0274 | 8.0 | 2104 | 0.0694 | 0.0694 | 0.1152 | 0.8380 | 0.9333 |
| 0.0274 | 9.0 | 2367 | 0.0742 | 0.0742 | 0.1435 | 0.8269 | 0.93 |
| 0.0213 | 10.0 | 2630 | 0.0659 | 0.0659 | 0.1022 | 0.8463 | 0.9367 |
| 0.0213 | 11.0 | 2893 | 0.0600 | 0.0600 | 0.1054 | 0.8599 | 0.9433 |
| 0.0127 | 12.0 | 3156 | 0.0540 | 0.0540 | 0.0988 | 0.8739 | 0.9433 |
| 0.0127 | 13.0 | 3419 | 0.0479 | 0.0479 | 0.0854 | 0.8883 | 0.9567 |
| 0.0077 | 14.0 | 3682 | 0.0517 | 0.0517 | 0.0848 | 0.8793 | 0.95 |
| 0.0077 | 15.0 | 3945 | 0.0405 | 0.0405 | 0.0851 | 0.9054 | 0.9633 |
| 0.0051 | 16.0 | 4208 | 0.0430 | 0.0430 | 0.0742 | 0.8996 | 0.9533 |
| 0.0051 | 17.0 | 4471 | 0.0368 | 0.0368 | 0.0721 | 0.9142 | 0.96 |
| 0.0036 | 18.0 | 4734 | 0.0352 | 0.0352 | 0.0709 | 0.9180 | 0.96 |
| 0.0036 | 19.0 | 4997 | 0.0345 | 0.0345 | 0.0654 | 0.9195 | 0.9567 |
| 0.0029 | 20.0 | 5260 | 0.0366 | 0.0366 | 0.0671 | 0.9146 | 0.96 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Coolhand/Abuela
|
[
"en",
"image_restoration",
"superresolution",
"license:mit"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 209.50 +/- 25.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (replace the placeholder repo id and filename with this model's):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
Coyotl/DialoGPT-test-last-arthurmorgan
|
[
"conversational"
] |
conversational
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
|
CrypticT1tan/DialoGPT-medium-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="justinsiow/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
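Once loaded, the Q-table is just an array indexed by state; a hypothetical sketch of greedy action selection (the table shape matches FrozenLake 4x4, but the values below are made up):

```python
import numpy as np

# FrozenLake-v1 4x4 has 16 states and 4 actions.
qtable = np.zeros((16, 4))
qtable[0, 2] = 1.0  # pretend action 2 has the highest value in state 0

state = 0
# Greedy policy: pick the action with the largest Q-value in this state.
action = int(np.argmax(qtable[state]))
```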
|
Culmenus/opus-mt-de-is-finetuned-de-to-is
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.43 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="justinsiow/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ancc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.19 +/- 20.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (replace the placeholder repo id and filename with this model's):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ekkicc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
Aggretriever is an encoder that aggregates both lexical and semantic text information into a single dense vector for dense retrieval. It is fine-tuned on the MS MARCO corpus with BM25 negative sampling, following the approach described in [Aggretriever: A Simple Approach to Aggregate Textual Representation for Robust Dense Passage Retrieval](https://arxiv.org/abs/2208.00511).
<p align="center">
<img src="https://raw.githubusercontent.com/castorini/dhr/main/fig/aggretriever_teaser.png" width="600">
</p>
The associated GitHub repository for fine-tuning is available [here](https://github.com/castorini/dhr), and the reproduction guide for Pyserini is [here]. The following variants are also available:
Model | Initialization | MARCO Dev | Encoder Path
|---|---|---|---
aggretriever-distilbert | distilbert-base-uncased | 34.1 | [castorini/aggretriever-distilbert](https://huggingface.co/castorini/aggretriever-distilbert)
aggretriever-cocondenser | Luyu/co-condenser-marco | 36.2 | [castorini/aggretriever-cocondenser](https://huggingface.co/castorini/aggretriever-cocondenser)
## Usage (HuggingFace Transformers)
The model can be used directly with Hugging Face Transformers; we use the Aggretriever implementation from Pyserini [here](https://github.com/castorini/pyserini/blob/master/pyserini/encode/_aggretriever.py).
```python
from pyserini.encode._aggretriever import AggretrieverQueryEncoder
from pyserini.encode._aggretriever import AggretrieverDocumentEncoder
model_name = 'castorini/aggretriever-cocondenser'
query_encoder = AggretrieverQueryEncoder(model_name, device='cpu')
context_encoder = AggretrieverDocumentEncoder(model_name, device='cpu')
query = ["Where was Marie Curie born?"]
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder.encode(query)
ctx_emb = context_encoder.encode(contexts)
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 45.56658
score2 = query_emb @ ctx_emb[1] # 45.81762
```
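The ranking step above is just a dot product between the query embedding and each passage embedding; a toy numpy sketch with made-up low-dimensional vectors (real Aggretriever embeddings are much larger):

```python
import numpy as np

query_emb = np.array([0.2, 0.5, 0.1])
ctx_emb = np.array([[0.1, 0.6, 0.0],   # passage 0
                    [0.3, 0.1, 0.2]])  # passage 1

# One dot product per passage; the highest score wins.
scores = ctx_emb @ query_emb
best = int(np.argmax(scores))
```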
|
D3xter1922/electra-base-discriminator-finetuned-cola
|
[
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
text-classification
|
{
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-500-MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-500-MLM
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.13.2
|
DJSammy/bert-base-swedish-uncased_BotXO-ai
|
[
"pytorch",
"transformers"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
license: creativeml-openrail-m
language:
- en
---
epiCRealism v1 + ChilloutMix Ni fp32 fix 0.6 Weighted Sum >> (1)
LOFI v2 + NewMarsMix R 11 0.55 Weighted Sum >> (2)
(1) + (2) 0.4 Weighted Sum >> (3)
(3) + RetMix V2 0.3 Weighted Sum >> ZincMix
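The recipe above chains pairwise "Weighted Sum" checkpoint merges. As an illustration only, here is a minimal sketch of what a single weighted-sum merge does, with plain dicts of floats standing in for real model state dicts (with PyTorch tensors the same per-parameter expression applies); note that which side of the merge the multiplier weights depends on the merging tool's convention:

```python
def weighted_sum(state_a, state_b, alpha):
    """Merge two checkpoints parameter-wise: alpha * A + (1 - alpha) * B.

    Toy stand-in for a state-dict merge; the assignment of `alpha` to
    model A vs. model B is an assumption here and varies by tool.
    """
    assert state_a.keys() == state_b.keys()
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Toy two-parameter "checkpoints".
a = {'w': 1.0, 'b': 0.0}
b = {'w': 0.0, 'b': 1.0}

merged = weighted_sum(a, b, 0.6)
print(merged)  # {'w': 0.6, 'b': 0.4}
```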
|
DKpro000/DialoGPT-small-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-common_voice-tr-demo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: COMMON_VOICE - TR
type: common_voice
config: tr
split: test
args: 'Config: tr, Training split: train+validation, Eval split: test'
metrics:
- name: Wer
type: wer
value: 0.35205801246042284
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3760
- Wer: 0.3521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5999 | 1.0 |
| No log | 1.83 | 200 | 2.9942 | 0.9999 |
| No log | 2.75 | 300 | 0.9031 | 0.7883 |
| No log | 3.67 | 400 | 0.5930 | 0.6226 |
| 3.1501 | 4.59 | 500 | 0.4967 | 0.5234 |
| 3.1501 | 5.5 | 600 | 0.4888 | 0.5053 |
| 3.1501 | 6.42 | 700 | 0.4393 | 0.4745 |
| 3.1501 | 7.34 | 800 | 0.4362 | 0.4370 |
| 3.1501 | 8.26 | 900 | 0.4384 | 0.4224 |
| 0.2259 | 9.17 | 1000 | 0.4169 | 0.4009 |
| 0.2259 | 10.09 | 1100 | 0.3965 | 0.3887 |
| 0.2259 | 11.01 | 1200 | 0.4072 | 0.3840 |
| 0.2259 | 11.93 | 1300 | 0.3937 | 0.3703 |
| 0.2259 | 12.84 | 1400 | 0.3901 | 0.3655 |
| 0.1024 | 13.76 | 1500 | 0.3835 | 0.3559 |
| 0.1024 | 14.68 | 1600 | 0.3780 | 0.3534 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Daltcamalea01/Camaleaodalt
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-07T08:29:29Z
|
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-large-uncased-whole-word-masking-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 0.1796 |
| 0.21 | 2.0 | 500 | 0.2187 |
| 0.21 | 3.0 | 750 | 0.3100 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
DanBot/TCRsynth
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-07T08:34:20Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Raiden-1001/poca-Soccerv6.1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Danih1502/t5-base-finetuned-en-to-de
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.62
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="pabloyesteb/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
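Once the Q-table is loaded (the hub payload typically also includes a `qtable` entry alongside `env_id` — an assumption here), acting greedily just means taking the argmax of the state's row; a minimal sketch with a toy table:

```python
def greedy_action(q_table, state):
    # Pick the action with the highest Q-value for this state.
    row = q_table[state]
    return max(range(len(row)), key=row.__getitem__)

# Toy 2-state, 3-action Q-table standing in for model["qtable"].
q = [[0.1, 0.5, 0.2],
     [0.9, 0.0, 0.3]]
print(greedy_action(q, 0))  # 1
print(greedy_action(q, 1))  # 0
```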
|
DannyMichael/ECU911
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
datasets:
- tatsu-lab/alpaca
- yizhongw/self_instruct
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
- es
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
---
# Note
## Original LLaMA weights are not used in this model, so it is MIT licensed
I used the Alpaca prompting method:
```python
def prompt_to_instruction(instruction, input_=None, response_=None, eos='<|endoftext|>'):
if input_ is None:
st1_prompting = f'Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\n{instruction}\n\n'
else:
st1_prompting = f'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n\n{instruction}\n\n### Input:\n\n{input_}\n\n'
resp = f'### Response:\n\n{response_}{eos}' if response_ is not None else '### Response:\n\n'
return st1_prompting + resp
```
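For illustration, here is how the helper above formats prompts at inference time versus training time (a self-contained sketch; the example instruction and response are made up):

```python
def prompt_to_instruction(instruction, input_=None, response_=None, eos='<|endoftext|>'):
    # Same helper as above, reproduced so the example is self-contained.
    if input_ is None:
        st1_prompting = f'Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\n{instruction}\n\n'
    else:
        st1_prompting = f'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n\n{instruction}\n\n### Input:\n\n{input_}\n\n'
    resp = f'### Response:\n\n{response_}{eos}' if response_ is not None else '### Response:\n\n'
    return st1_prompting + resp

# Inference-time prompt: no response, so it ends with an open "### Response:" header
# for the model to complete.
prompt = prompt_to_instruction('Translate "hello" to German.')
print(prompt)

# Training-time prompt: the target response and the EOS token are appended.
train_prompt = prompt_to_instruction('Translate "hello" to German.', response_='Hallo')
print(train_prompt.endswith('Hallo<|endoftext|>'))  # True
```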
# Using Model In Transformers
```python
import torch
from transformers import GenerationConfig, LlamaTokenizer, LlamaForCausalLM
# Loading Tokenizer
tokenizer = LlamaTokenizer.from_pretrained("erfanzar/LGeM-7B")
# Generation Config
gf = GenerationConfig(
temperature=1,
top_p=0.75,
top_k=40,
max_new_tokens=256,
num_beams=4,
)
# Loading Model
model = LlamaForCausalLM.from_pretrained(
"erfanzar/LGeM-7B",
load_in_8bit=True,
device_map="auto",
torch_dtype=torch.float16,
)
while True:
instruction = input('=> ')
input_ = None
prompt = prompt_to_instruction(instruction, input_)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
input_ids = input_ids.to(model.device)
with torch.no_grad():
prediction = model.generate(
input_ids=input_ids,
return_dict_in_generate=True,
generation_config=gf,
output_scores=True,
)
response = tokenizer.decode(prediction.sequences[0], skip_special_tokens=True)
print('\n\n\n')
print(response[len(prompt)+1:])
print('\n\n')
```
# Using Model in OST
## [Open Source Transformers](https://github.com/erfanzar/OST-OpenSourceTransformers)
### LGeM 🚀
- What is LGeM? LGeM is a causal LM trained on self-instruct data (Alpaca data); for the initial training of the main model (weights are available) I used pretrained weights from Alpaca LoRA (open source)
- it's decoder-only
- built in PyTorch
- you can simply import models like
```python
from modules import LGeMForCausalLM
```
- and the training code is available at LGeM-Train.py (check source)
- training parameters:
  - learning rate 1e-4
  - AdamW (weight decay 1e-2)
  - batch size 2
  - 4 × A100 80GB GPUs used for training
``` shell
python3 LGeM-train.py
```
|
DarshanDeshpande/marathi-distilbert
|
[
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| null |
---
license: apache-2.0
language:
- en
---
This project introduces Tencent’s Multilingual Machine Translation System for the WMT2022 Large-Scale African Translation shared task.
For more details, please refer to the github repo: [WMT2022-Large-Scale-African](https://github.com/wxjiao/WMT2022-Large-Scale-African).
|
Daryaflp/roberta-retrained_ru_covid
|
[
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| 2023-04-07T09:26:09Z
|
Original model: https://huggingface.co/dvruette/llama-13b-pretrained-sft-do2
|
DataikuNLP/average_word_embeddings_glove.6B.300d
|
[
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1388215940740009987/lJp1cKMS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1561520263640096769/XNoPDxwi_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brazzers & BLACKED</div>
<div style="text-align: center; font-size: 14px;">@blacked_com-brazzers</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brazzers & BLACKED.
| Data | Brazzers | BLACKED |
| --- | --- | --- |
| Tweets downloaded | 3242 | 3248 |
| Retweets | 599 | 247 |
| Short tweets | 367 | 452 |
| Tweets kept | 2276 | 2549 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ppy2cke/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @blacked_com-brazzers's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s3b2ft39) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s3b2ft39/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/blacked_com-brazzers')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DataikuNLP/paraphrase-albert-small-v2
|
[
"pytorch",
"albert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"AlbertModel"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 628
| null |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
---
A dream style LoRA for Stable Diffusion webui. \
Sample images:


|
Davlan/bert-base-multilingual-cased-ner-hrl
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 269,898
| 2023-04-07T10:15:51Z
|
---
thumbnail: "https://motionbert.github.io/assets/teaser.gif"
tags:
- 3D Human Pose Estimation
- Skeleton-based Action Recognition
- Mesh Recovery
arxiv: "2210.06551"
---
# MotionBERT
This is the official PyTorch implementation of the paper *"[Learning Human Motion Representations: A Unified Perspective](https://arxiv.org/pdf/2210.06551.pdf)"*.
<img src="https://motionbert.github.io/assets/teaser.gif" alt="" style="zoom: 60%;" />
## Installation
```bash
conda create -n motionbert python=3.7 anaconda
conda activate motionbert
# Please install PyTorch according to your CUDA version.
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -r requirements.txt
```
## Getting Started
| Task | Document |
| --------------------------------- | ------------------------------------------------------------ |
| Pretrain | [docs/pretrain.md](docs/pretrain.md) |
| 3D human pose estimation | [docs/pose3d.md](docs/pose3d.md) |
| Skeleton-based action recognition | [docs/action.md](docs/action.md) |
| Mesh recovery | [docs/mesh.md](docs/mesh.md) |
## Applications
### In-the-wild inference (for custom videos)
Please refer to [docs/inference.md](docs/inference.md).
### Using MotionBERT for *human-centric* video representations
```python
'''
x: 2D skeletons
type = <class 'torch.Tensor'>
shape = [batch size * frames * joints(17) * channels(3)]
MotionBERT: pretrained human motion encoder
type = <class 'lib.model.DSTformer.DSTformer'>
E: encoded motion representation
type = <class 'torch.Tensor'>
shape = [batch size * frames * joints(17) * channels(512)]
'''
E = MotionBERT.get_representation(x)
```
> **Hints**
>
> 1. The model can handle variable input lengths (no more than 243 frames); there is no need to explicitly specify the input length elsewhere.
> 2. The model uses 17 body keypoints ([H36M format](https://github.com/JimmySuen/integral-human-pose/blob/master/pytorch_projects/common_pytorch/dataset/hm36.py#L32)). If you are using other formats, please convert them before feeding to MotionBERT.
> 3. Please refer to [model_action.py](lib/model/model_action.py) and [model_mesh.py](lib/model/model_mesh.py) for examples of (easily) adapting MotionBERT to different downstream tasks.
> 4. For RGB videos, you need to extract 2D poses ([inference.md](docs/inference.md)), convert the keypoint format ([dataset_wild.py](lib/data/dataset_wild.py)), and then feed to MotionBERT ([infer_wild.py](infer_wild.py)).
>
## Model Zoo
<img src="https://motionbert.github.io/assets/demo.gif" alt="" style="zoom: 50%;" />
| Model | Download Link | Config | Performance |
| ------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------- |
| MotionBERT (162MB) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/pretrain/MB_release/latest_epoch.bin) | [pretrain/MB_pretrain.yaml](configs/pretrain/MB_pretrain.yaml) | - |
| MotionBERT-Lite (61MB) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/pretrain/MB_lite/latest_epoch.bin) | [pretrain/MB_lite.yaml](configs/pretrain/MB_lite.yaml) | - |
| 3D Pose (H36M-SH, scratch) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/pose3d/MB_train_h36m/best_epoch.bin) | [pose3d/MB_train_h36m.yaml](configs/pose3d/MB_train_h36m.yaml) | 39.2mm (MPJPE) |
| 3D Pose (H36M-SH, ft) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/pose3d/FT_MB_release_MB_ft_h36m/best_epoch.bin) | [pose3d/MB_ft_h36m.yaml](configs/pose3d/MB_ft_h36m.yaml) | 37.2mm (MPJPE) |
| Action Recognition (x-sub, ft) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/action/FT_MB_release_MB_ft_NTU60_xsub/best_epoch.bin) | [action/MB_ft_NTU60_xsub.yaml](configs/action/MB_ft_NTU60_xsub.yaml) | 97.2% (Top1 Acc) |
| Action Recognition (x-view, ft) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/action/FT_MB_release_MB_ft_NTU60_xview/best_epoch.bin) | [action/MB_ft_NTU60_xview.yaml](configs/action/MB_ft_NTU60_xview.yaml) | 93.0% (Top1 Acc) |
| Mesh (with 3DPW, ft) | [HuggingFace](https://huggingface.co/walterzhu/MotionBERT/blob/main/checkpoint/mesh/FT_MB_release_MB_ft_pw3d/best_epoch.bin) | [mesh/MB_ft_pw3d.yaml](configs/mesh/MB_ft_pw3d.yaml) | 88.1mm (MPVE) |
In most use cases (especially with finetuning), `MotionBERT-Lite` gives similar performance with lower computation overhead.
## TODO
- [x] Scripts and docs for pretraining
- [x] Demo for custom videos
## Citation
If you find our work useful for your project, please consider citing the paper:
```bibtex
@article{motionbert2022,
title = {Learning Human Motion Representations: A Unified Perspective},
author = {Zhu, Wentao and Ma, Xiaoxuan and Liu, Zhaoyang and Liu, Libin and Wu, Wayne and Wang, Yizhou},
year = {2022},
journal = {arXiv preprint arXiv:2210.06551},
}
```
|
Davlan/mbart50-large-yor-eng-mt
|
[
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/31216/sn-chapayev-azur-lane-loha-ver
|
Davlan/xlm-roberta-base-finetuned-swahili
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 40
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Periramm/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|