modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
biglam/autotrain-beyond-the-books | 2023-06-26T12:27:32.000Z | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"text-classification",
"autotrain",
"lam",
"en",
"dataset:biglam/on_the_books",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | biglam | null | null | biglam/autotrain-beyond-the-books | 0 | 2 | transformers | 2023-06-07T09:59:47 | ---
tags:
- autotrain
- text-classification
- lam
language:
- en
widget:
- text: >-
Neither this act nor any other act relating to said Cherokee Indians of
Robeson County shall be construed so as to impose on said Indians any
powers, privileges, rights or immunities, or
- text: >-
That Section one hundred and twenty-two eightythree of the General Statutes
of North Carolina is hereby amended by striking out the word insane in the
catch line and in lines two, four, nine and fifteen and inserting in lieu
thereof the words mentally disordered.
datasets:
- biglam/on_the_books
co2_eq_emissions:
emissions: 0.2641096478393395
license: mit
library_name: transformers
metrics:
- accuracy
- f1
- recall
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 64771135885
- CO2 Emissions (in grams): 0.2641
## Validation Metrics
- Loss: 0.057
- Accuracy: 0.986
- Precision: 0.988
- Recall: 0.992
- AUC: 0.998
- F1: 0.990
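As a quick sanity check, the reported F1 is the harmonic mean of the precision and recall above:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.988
recall = 0.992

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.99, consistent with the reported F1 of 0.990
```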
## Usage
This model is trained on a dataset of historical documents related to Jim Crow laws in the United States.
The model was developed by drawing on the expertise of scholars and analyzing legal texts from various states, with the goal of identifying similarities between different states' Jim Crow laws.
As such, this model may be useful for researchers or policymakers interested in understanding the history of racial discrimination in the US legal system.
The easiest way to use this model locally is via the [Transformers](https://huggingface.co/docs/transformers/index) library [pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).
Once you have [installed transformers](https://huggingface.co/docs/transformers/installation), you can run the following code.
This will download and cache the model locally and allow you to make predictions on text input.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="biglam/autotrain-beyond-the-books")
text = "Neither this act nor any other act relating to said Cherokee Indians of Robeson County shall be construed..."
classifier(text)
```
This will return predictions in the following format:
```
[{'label': 'no_jim_crow', 'score': 0.9718555212020874}]
``` | 2,149 | [
[
-0.016571044921875,
-0.0273590087890625,
0.0227508544921875,
-0.0216217041015625,
-0.0038089752197265625,
-0.01041412353515625,
0.014068603515625,
-0.032196044921875,
0.01110076904296875,
0.02813720703125,
-0.0296173095703125,
-0.045440673828125,
-0.050872802734... |
mariabashkeva/digits | 2023-06-07T11:26:09.000Z | [
"keras",
"region:us"
] | null | mariabashkeva | null | null | mariabashkeva/digits | 0 | 2 | keras | 2023-06-07T11:07:38 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 114 | [
[
-0.0114898681640625,
-0.0592041015625,
0.01163482666015625,
0.003330230712890625,
-0.061798095703125,
0.037445068359375,
0.0273284912109375,
0.0172576904296875,
0.068359375,
0.0236663818359375,
-0.0282440185546875,
-0.051177978515625,
-0.0462646484375,
-0.01... |
olgachertash/digits | 2023-06-07T13:06:48.000Z | [
"keras",
"region:us"
] | null | olgachertash | null | null | olgachertash/digits | 0 | 2 | keras | 2023-06-07T13:01:35 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 114 | [
[
-0.0114898681640625,
-0.0592041015625,
0.01163482666015625,
0.003330230712890625,
-0.061798095703125,
0.037445068359375,
0.0273284912109375,
0.0172576904296875,
0.068359375,
0.0236663818359375,
-0.0282440185546875,
-0.051177978515625,
-0.0462646484375,
-0.01... |
EducativeCS2023/roberta-similarity | 2023-06-07T13:31:43.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | EducativeCS2023 | null | null | EducativeCS2023/roberta-similarity | 0 | 2 | transformers | 2023-06-07T13:20:52 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-similarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-similarity
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5172
- Accuracy: 0.834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6747 | 0.16 | 10 | 0.6562 | 0.672 |
| 0.5355 | 0.32 | 20 | 0.5163 | 0.772 |
| 0.5374 | 0.48 | 30 | 0.5901 | 0.74 |
| 0.5064 | 0.63 | 40 | 0.4904 | 0.782 |
| 0.4241 | 0.79 | 50 | 0.5793 | 0.73 |
| 0.5484 | 0.95 | 60 | 0.5381 | 0.776 |
| 0.5441 | 1.11 | 70 | 0.5375 | 0.764 |
| 0.445 | 1.27 | 80 | 0.5096 | 0.792 |
| 0.4436 | 1.43 | 90 | 0.5617 | 0.814 |
| 0.4677 | 1.59 | 100 | 0.6145 | 0.796 |
| 0.4306 | 1.75 | 110 | 0.6105 | 0.814 |
| 0.3197 | 1.9 | 120 | 0.5112 | 0.772 |
| 0.3373 | 2.06 | 130 | 0.5168 | 0.818 |
| 0.3128 | 2.22 | 140 | 0.5007 | 0.824 |
| 0.3286 | 2.38 | 150 | 0.4900 | 0.83 |
| 0.476 | 2.54 | 160 | 0.4989 | 0.79 |
| 0.413 | 2.7 | 170 | 0.6129 | 0.748 |
| 0.3811 | 2.86 | 180 | 0.5137 | 0.818 |
| 0.3224 | 3.02 | 190 | 0.5178 | 0.806 |
| 0.2917 | 3.17 | 200 | 0.5382 | 0.802 |
| 0.3696 | 3.33 | 210 | 0.5610 | 0.822 |
| 0.3019 | 3.49 | 220 | 0.7040 | 0.792 |
| 0.3354 | 3.65 | 230 | 0.5342 | 0.826 |
| 0.2854 | 3.81 | 240 | 0.5047 | 0.832 |
| 0.3079 | 3.97 | 250 | 0.5124 | 0.83 |
| 0.3271 | 4.13 | 260 | 0.5876 | 0.808 |
| 0.276 | 4.29 | 270 | 0.5271 | 0.824 |
| 0.2519 | 4.44 | 280 | 0.5309 | 0.832 |
| 0.2107 | 4.6 | 290 | 0.5186 | 0.834 |
| 0.2471 | 4.76 | 300 | 0.5191 | 0.838 |
| 0.2751 | 4.92 | 310 | 0.5172 | 0.834 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
| 3,154 | [
[
-0.039215087890625,
-0.03948974609375,
0.016998291015625,
0.00139617919921875,
-0.004547119140625,
-0.00659942626953125,
-0.0008120536804199219,
-0.00675201416015625,
0.038604736328125,
0.0222320556640625,
-0.050994873046875,
-0.04931640625,
-0.05267333984375,
... |
sashatoropova/digits | 2023-06-07T14:11:08.000Z | [
"keras",
"region:us"
] | null | sashatoropova | null | null | sashatoropova/digits | 0 | 2 | keras | 2023-06-07T14:07:48 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 114 | [
[
-0.0114898681640625,
-0.0592041015625,
0.01163482666015625,
0.003330230712890625,
-0.061798095703125,
0.037445068359375,
0.0273284912109375,
0.0172576904296875,
0.068359375,
0.0236663818359375,
-0.0282440185546875,
-0.051177978515625,
-0.0462646484375,
-0.01... |
cafbr/bert-simililarity-hist | 2023-06-07T15:22:40.000Z | [
"keras",
"region:us"
] | null | cafbr | null | null | cafbr/bert-simililarity-hist | 0 | 2 | keras | 2023-06-07T15:20:18 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
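No training code is provided, but the table above is a stock Keras Adam configuration. For intuition only (this is an illustrative sketch, not the card's code), a single Adam update with these `beta`/`epsilon` values can be written in plain Python:

```python
import math

def adam_step(w, g, m, v, t, lr=1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    """One Adam update for a scalar weight, using the hyperparameters above."""
    m = beta_1 * m + (1 - beta_1) * g      # first-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * g * g  # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta_1 ** t)          # bias correction for step t
    v_hat = v / (1 - beta_2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + epsilon)
    return w, m, v

w, m, v = adam_step(w=1.0, g=0.5, m=0.0, v=0.0, t=1)
print(w)  # just under 0.999: the first step moves w by roughly lr against the gradient
```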
| 738 | [
[
-0.038055419921875,
-0.0411376953125,
0.0288848876953125,
0.005496978759765625,
-0.033966064453125,
-0.0165863037109375,
0.0011091232299804688,
-0.0008072853088378906,
0.0232696533203125,
0.021636962890625,
-0.044891357421875,
-0.048980712890625,
-0.034118652343... |
saruizn/trabajo | 2023-06-07T16:18:03.000Z | [
"keras",
"has_space",
"region:us"
] | null | saruizn | null | null | saruizn/trabajo | 0 | 2 | keras | 2023-06-07T15:42:29 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
saruizn/modelo | 2023-06-07T15:57:55.000Z | [
"keras",
"region:us"
] | null | saruizn | null | null | saruizn/modelo | 0 | 2 | keras | 2023-06-07T15:57:50 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
allenai/open-instruct-oasst1-7b | 2023-06-20T17:44:05.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:OpenAssistant/oasst1",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.07327",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | allenai | null | null | allenai/open-instruct-oasst1-7b | 0 | 2 | transformers | 2023-06-07T17:04:11 | ---
datasets:
- OpenAssistant/oasst1
language:
- en
---
# Open-Instruct Open Assistant 7B
This model is a 7B LLaMa model finetuned on the Open Assistant dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
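Conceptually, the recovery step adds the released diff back onto the base weights, parameter by parameter. A minimal sketch of that idea (plain dicts standing in for state dicts; the real `scripts/weight_diff.py` operates on tensors and handles tokenizer files, dtypes, etc.):

```python
def recover(raw_weights, diff_weights):
    """Reconstruct tuned weights from base weights plus a released diff.

    Illustrative only: real checkpoints are tensors, not scalars.
    """
    assert raw_weights.keys() == diff_weights.keys()
    return {name: raw_weights[name] + diff_weights[name] for name in raw_weights}

base = {"layer.0.weight": 0.5, "layer.0.bias": -0.25}
diff = {"layer.0.weight": 0.25, "layer.0.bias": 0.125}
tuned = recover(base, diff)
print(tuned)  # {'layer.0.weight': 0.75, 'layer.0.bias': -0.125}
```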
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 32.9 | 29.7 | 6.0 | 6.5 | 20.4 | 29.5 | 26.8 | 7.8 | 10.1 | 20.4 | 47.8 | 23.8 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```bibtex
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 4,131 | [
[
-0.0267181396484375,
-0.042724609375,
0.0073394775390625,
0.00492095947265625,
-0.00904083251953125,
-0.0157623291015625,
-0.0251617431640625,
-0.0290985107421875,
-0.01032257080078125,
0.03240966796875,
-0.040985107421875,
-0.042938232421875,
-0.044921875,
... |
allenai/open-instruct-cot-7b | 2023-06-20T17:45:42.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | allenai | null | null | allenai/open-instruct-cot-7b | 0 | 2 | transformers | 2023-06-07T17:05:59 | ---
language:
- en
---
# Open-Instruct CoT 7B
This model is a 7B LLaMa model finetuned on the CoT dataset (a subset of Flan v2). *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.**
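A small helper function (hypothetical, not part of the repo) makes the trailing newline hard to forget:

```python
def build_prompt(user_message: str) -> str:
    """Format a message in the <|user|>/<|assistant|> template described above.

    The trailing newline after <|assistant|> is deliberate; omitting it can
    noticeably hurt generation quality.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_prompt("Your message here!")
print(repr(prompt))  # '<|user|>\nYour message here!\n<|assistant|>\n'
```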
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 41.3 | 42.5 | 6.5 | 27.5 | 33.7 | 31.3 | 44.4 | 8.5 | 7.4 | 17.3 | 4.2 | 22.4 |
If you use this model, please cite our work, the llama paper, and the original dataset:
```bibtex
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
``` | 3,871 | [
[
-0.02960205078125,
-0.041290283203125,
0.002239227294921875,
0.006855010986328125,
-0.00968170166015625,
-0.0099945068359375,
-0.0274505615234375,
-0.0357666015625,
-0.0139312744140625,
0.0293731689453125,
-0.042816162109375,
-0.041656494140625,
-0.0449523925781... |
Dantenator/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-07T19:58:07.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Dantenator | null | null | Dantenator/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-07T19:57:34 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 401.50 +/- 120.19
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dantenator -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dantenator -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Dantenator
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
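For intuition, `exploration_fraction` and `exploration_final_eps` above define a linear epsilon-greedy schedule: epsilon decays from 1.0 to 0.01 over the first 10% of the 1,000,000 timesteps, then stays flat. A plain-Python sketch of that schedule (assuming SB3's usual linear interpolation):

```python
def epsilon(step: int,
            n_timesteps: int = 1_000_000,
            exploration_fraction: float = 0.1,
            final_eps: float = 0.01,
            initial_eps: float = 1.0) -> float:
    """Linear epsilon-greedy schedule matching the hyperparameters above."""
    decay_steps = exploration_fraction * n_timesteps  # 100,000 steps of decay
    progress = min(step / decay_steps, 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))        # 1.0
print(epsilon(50_000))   # ~0.505, halfway through the decay
print(epsilon(500_000))  # ~0.01, flat after the first 100,000 steps
```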
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,765 | [
[
-0.044189453125,
-0.038726806640625,
0.01995849609375,
0.02532958984375,
-0.0107879638671875,
-0.01849365234375,
0.0102386474609375,
-0.012359619140625,
0.013427734375,
0.022918701171875,
-0.0716552734375,
-0.0362548828125,
-0.0248260498046875,
-0.0030364990... |
michaelfeil/ct2fast-RedPajama-INCITE-7B-Chat | 2023-06-07T20:39:16.000Z | [
"transformers",
"ctranslate2",
"int8",
"float16",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:OpenAssistant/oasst1",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | michaelfeil | null | null | michaelfeil/ct2fast-RedPajama-INCITE-7B-Chat | 0 | 2 | transformers | 2023-06-07T20:10:35 | ---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
widget:
- text: "<human>: Write an email to my friends inviting them to come to my home on Friday for a dinner party, bring their own food to share.\n<bot>:"
example_title: "Email Writing"
- text: "<human>: Create a list of things to do in San Francisco\n<bot>:"
example_title: "Brainstorming"
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory use by 2x-4x, using int8 inference in C++ on CPU or GPU.

This is a quantized version of [togethercomputer/RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat).
```bash
pip install "hf-hub-ctranslate2>=2.0.8" "ctranslate2>=3.14.0"
```
Converted on 2023-06-07 using
```bash
ct2-transformers-converter --model togethercomputer/RedPajama-INCITE-7B-Chat --output_dir /home/michael/tmp-ct2fast-RedPajama-INCITE-7B-Chat --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
Checkpoint compatible with [ctranslate2>=3.15.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-RedPajama-INCITE-7B-Chat"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# RedPajama-INCITE-7B-Chat
RedPajama-INCITE-7B-Chat was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
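A small helper (hypothetical, shown only for illustration) that applies this template:

```python
def chat_prompt(instruction: str) -> str:
    """Wrap an instruction in the <human>/<bot> template expected by the model."""
    return f"<human>: {instruction}\n<bot>:"

print(chat_prompt("Who is Alan Turing?"))
# <human>: Who is Alan Turing?
# <bot>:
```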
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.bfloat16)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing, OBE, FRS, (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-7B-Chat` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-7B-Chat` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-7B-Chat`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 79M tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) | 9,855 | [
[
-0.037200927734375,
-0.0787353515625,
0.022979736328125,
0.0263824462890625,
0.0012731552124023438,
-0.0112762451171875,
-0.020660400390625,
-0.03668212890625,
0.02593994140625,
0.0190887451171875,
-0.04083251953125,
-0.02435302734375,
-0.05615234375,
-0.003... |
Brandulio/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-08T01:25:14.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Brandulio | null | null | Brandulio/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-08T01:24:39 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 131.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Brandulio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Brandulio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Brandulio
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
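For context, `exploration_fraction` and `exploration_final_eps` together define the linear epsilon-greedy schedule Stable-Baselines3's DQN uses: epsilon decays linearly from 1.0 to 0.01 over the first 10% of the 1M training steps, then holds. A small sketch of that schedule (illustrative, not RL Zoo code):

```python
def epsilon(step, n_timesteps=1_000_000,
            exploration_fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear epsilon-greedy schedule: decay from initial_eps to final_eps
    over exploration_fraction of training, then hold final_eps."""
    progress = min(1.0, step / (exploration_fraction * n_timesteps))
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))        # start of training: fully random actions
print(epsilon(100_000))  # end of the exploration phase
print(epsilon(500_000))  # held at final_eps for the rest of training
```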
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,762 | [
[
-0.043212890625,
-0.038848876953125,
0.0191497802734375,
0.02490234375,
-0.01117706298828125,
-0.0173492431640625,
0.009613037109375,
-0.01273345947265625,
0.01184844970703125,
0.022003173828125,
-0.071533203125,
-0.035186767578125,
-0.02471923828125,
-0.003... |
seanghay/bookmebus-roberta | 2023-06-08T06:13:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | seanghay | null | null | seanghay/bookmebus-roberta | 0 | 2 | transformers | 2023-06-08T06:05:21 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bookmebus-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bookmebus-roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1487
- Accuracy: 0.5121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 463 | 1.2076 | 0.4587 |
| 1.2869 | 2.0 | 926 | 1.1487 | 0.5121 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,377 | [
[
-0.0237884521484375,
-0.050079345703125,
0.024444580078125,
0.00098419189453125,
-0.0227203369140625,
-0.035888671875,
-0.0172271728515625,
-0.018707275390625,
0.0004668235778808594,
0.040557861328125,
-0.05377197265625,
-0.048553466796875,
-0.05633544921875,
... |
seanghay/bookmebus-sentiment-analysis | 2023-06-08T07:44:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | seanghay | null | null | seanghay/bookmebus-sentiment-analysis | 0 | 2 | transformers | 2023-06-08T06:15:42 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bookmebus-roberta-2
results: []
widget:
- text: អ្នកបើកឡានមិនគោរពច្បាប់
- text: រហ័សទាន់ចិត្ត
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bookmebus-roberta-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1144
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 249 | 1.2029 | 0.4931 |
| No log | 2.0 | 498 | 1.3241 | 0.5347 |
| 0.9919 | 3.0 | 747 | 1.3590 | 0.5 |
| 0.9919 | 4.0 | 996 | 1.4430 | 0.5139 |
| 0.7674 | 5.0 | 1245 | 1.5039 | 0.5486 |
| 0.7674 | 6.0 | 1494 | 1.6894 | 0.5208 |
| 0.5381 | 7.0 | 1743 | 1.9312 | 0.4861 |
| 0.5381 | 8.0 | 1992 | 1.9636 | 0.5208 |
| 0.3594 | 9.0 | 2241 | 2.0836 | 0.5 |
| 0.3594 | 10.0 | 2490 | 2.1144 | 0.5 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,939 | [
[
-0.029754638671875,
-0.044952392578125,
0.0186004638671875,
-0.001773834228515625,
-0.0152740478515625,
-0.0277557373046875,
-0.007091522216796875,
-0.0171051025390625,
0.00823211669921875,
0.035125732421875,
-0.053436279296875,
-0.04547119140625,
-0.05648803710... |
Jagannath/phishNet | 2023-06-08T06:58:06.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | Jagannath | null | null | Jagannath/phishNet | 0 | 2 | transformers | 2023-06-08T06:50:18 | ---
tags:
- generated_from_keras_callback
model-index:
- name: phishNet
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# phishNet
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 834 | [
[
-0.0223846435546875,
-0.037261962890625,
0.0189971923828125,
-0.00569915771484375,
-0.04241943359375,
-0.028076171875,
0.01123046875,
-0.0209808349609375,
0.0176544189453125,
0.039703369140625,
-0.046142578125,
-0.035430908203125,
-0.057891845703125,
-0.0201... |
sankar1535/slurp-intent_baseline-distilbert-base-uncased | 2023-06-08T09:49:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | sankar1535 | null | null | sankar1535/slurp-intent_baseline-distilbert-base-uncased | 1 | 2 | transformers | 2023-06-08T09:09:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: slurp-intent_baseline-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# slurp-intent_baseline-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6705
- Accuracy: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7487 | 1.0 | 720 | 1.0701 | 0.7811 |
| 1.1513 | 2.0 | 1440 | 0.6923 | 0.8367 |
| 0.4843 | 3.0 | 2160 | 0.6241 | 0.8569 |
| 0.3823 | 4.0 | 2880 | 0.6058 | 0.8637 |
| 0.2328 | 5.0 | 3600 | 0.6172 | 0.8701 |
| 0.2081 | 6.0 | 4320 | 0.6486 | 0.8667 |
| 0.1472 | 7.0 | 5040 | 0.6541 | 0.8682 |
| 0.1345 | 8.0 | 5760 | 0.6552 | 0.8716 |
| 0.1215 | 9.0 | 6480 | 0.6695 | 0.8701 |
| 0.0999 | 10.0 | 7200 | 0.6705 | 0.8701 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,954 | [
[
-0.03204345703125,
-0.040191650390625,
0.01456451416015625,
0.002323150634765625,
-0.018951416015625,
-0.020263671875,
0.0020751953125,
-0.00726318359375,
0.0167999267578125,
0.0222930908203125,
-0.049530029296875,
-0.04656982421875,
-0.061126708984375,
-0.0... |
dev-senolys/camembert_base_fine_tunned_categories_weight_v2 | 2023-06-09T09:28:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | dev-senolys | null | null | dev-senolys/camembert_base_fine_tunned_categories_weight_v2 | 0 | 2 | transformers | 2023-06-08T12:10:13 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: camembert_base_fine_tunned_categories_weight_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_base_fine_tunned_categories_weight_v2
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 127 | 2.0729 |
| No log | 2.0 | 254 | 1.9083 |
| No log | 3.0 | 381 | 1.8186 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,403 | [
[
-0.0249786376953125,
-0.02923583984375,
0.0121917724609375,
0.012603759765625,
-0.0352783203125,
-0.0242919921875,
-0.003055572509765625,
-0.00373077392578125,
0.00922393798828125,
0.038543701171875,
-0.057891845703125,
-0.049468994140625,
-0.0450439453125,
... |
Yandexxxx/DrawListner | 2023-06-20T12:44:52.000Z | [
"keras",
"region:us"
] | null | Yandexxxx | null | null | Yandexxxx/DrawListner | 0 | 2 | keras | 2023-06-08T13:32:31 | ---
library_name: keras
---
# A model for digit recognition, trained on the MNIST dataset
 | 122 | [
[
-0.004390716552734375,
-0.058319091796875,
0.0165863037109375,
0.005924224853515625,
-0.055572509765625,
0.03607177734375,
0.0265960693359375,
0.01480865478515625,
0.07196044921875,
0.0240478515625,
-0.032562255859375,
-0.048095703125,
-0.05169677734375,
-0.... |
yankihue/gpt2-tr-uncontrolled-classification-news-economics-final | 2023-06-08T19:45:03.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | yankihue | null | null | yankihue/gpt2-tr-uncontrolled-classification-news-economics-final | 0 | 2 | transformers | 2023-06-08T19:41:37 | mean:
rewards (before) -0.553348
rewards (after) 4.374639
dtype: float64
median:
rewards (before) -0.721978
rewards (after) 5.523491
dtype: float64 | 163 | [
[
-0.040679931640625,
0.0016498565673828125,
0.01502227783203125,
0.0052032470703125,
-0.045867919921875,
-0.025482177734375,
0.0062255859375,
0.019622802734375,
0.07562255859375,
0.0146484375,
-0.027252197265625,
-0.0312347412109375,
-0.08447265625,
0.0094604... |
devers93/Arab | 2023-06-08T20:34:42.000Z | [
"keras",
"region:us"
] | null | devers93 | null | null | devers93/Arab | 0 | 2 | keras | 2023-06-08T19:57:07 | ---
library_name: keras
---
# A model for digit recognition
Trained on the MNIST dataset
 | 111 | [
[
-0.0126953125,
-0.05767822265625,
0.01116943359375,
0.0049896240234375,
-0.0594482421875,
0.04046630859375,
0.023468017578125,
0.0169219970703125,
0.06683349609375,
0.0245513916015625,
-0.027191162109375,
-0.048187255859375,
-0.044525146484375,
-0.0100479125... |
TheBloke/starcoderplus-GGML | 2023-06-09T09:08:42.000Z | [
"transformers",
"starcoder",
"code",
"text-generation",
"dataset:bigcode/the-stack-dedup",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"license:bigcode-openrail-m",
"model-index",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/starcoderplus-GGML | 34 | 2 | transformers | 2023-06-08T21:29:59 | ---
pipeline_tag: text-generation
inference: false
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
model-index:
- name: StarCoderPlus
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval (Prompted)
metrics:
- name: pass@1
type: pass@1
value: 26.7
verified: false
- task:
type: text-generation
dataset:
type: MMLU (5-shot)
name: MMLU
metrics:
- name: Accuracy
type: Accuracy
value: 45.1
verified: false
- task:
type: text-generation
dataset:
type: HellaSwag (10-shot)
name: HellaSwag
metrics:
- name: Accuracy
type: Accuracy
value: 77.3
verified: false
- task:
type: text-generation
dataset:
type: ARC (25-shot)
name: ARC
metrics:
- name: Accuracy
type: Accuracy
value: 48.9
verified: false
- task:
type: text-generation
dataset:
      type: TruthfulQA (0-shot)
      name: TruthfulQA
metrics:
- name: Accuracy
type: Accuracy
value: 37.9
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Bigcode's StarcoderPlus GGML
These files are GGML format model files for [Bigcode's StarcoderPlus](https://huggingface.co/bigcode/starcoderplus).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starcoderplus-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starcoderplus-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigcode/starcoderplus)
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| starcoderplus.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| starcoderplus.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| starcoderplus.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| starcoderplus.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| starcoderplus.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
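The "Max RAM required" figures above follow a simple rule of thumb: the full quantized file is loaded into memory, plus roughly 2.5 GB of working overhead (with no GPU offload). A quick sanity check of the table, as a hypothetical helper:

```python
def max_ram_gb(file_size_gb, overhead_gb=2.5):
    """Rule-of-thumb RAM estimate matching the table above:
    quantized file size plus ~2.5 GB of inference overhead."""
    return round(file_size_gb + overhead_gb, 2)

# file sizes (GB) from the "Provided files" table
files = {"q4_0": 10.75, "q4_1": 11.92, "q5_0": 13.09, "q5_1": 14.26, "q8_0": 20.11}
for name, size in files.items():
    print(name, max_ram_gb(size))
```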
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Bigcode's StarcoderPlus
# StarCoderPlus
Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
## Table of Contents
1. [Model Summary](##model-summary)
2. [Use](##use)
3. [Limitations](##limitations)
4. [Training](##training)
5. [License](##license)
6. [Citation](##citation)
## Model Summary
StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** English & 80+ Programming languages
## Use
### Intended use
The model was trained on English and GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderplus"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
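The FIM prompt in the snippet above follows a fixed token layout: prefix, suffix, then a `<fim_middle>` sentinel after which the model generates the missing middle. A small helper that assembles it (the function is illustrative, not part of the `transformers` API):

```python
def fim_prompt(prefix, suffix):
    """Assemble a fill-in-the-middle prompt: the model generates the code
    that belongs between `prefix` and `suffix` after the <fim_middle> token."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = fim_prompt("def print_hello_world():\n    ", "\n    print('Hello world!')")
print(prompt)
```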
### Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
# Training
StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 14 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
| 11,222 | [
[
-0.03448486328125,
-0.049102783203125,
0.0237274169921875,
0.0125274658203125,
-0.0128326416015625,
-0.01227569580078125,
-0.022552490234375,
-0.0401611328125,
0.0178070068359375,
0.02386474609375,
-0.036651611328125,
-0.038818359375,
-0.04656982421875,
0.00... |
wiorz/legal_bert_sm_gen1_large | 2023-06-12T23:56:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/legal_bert_sm_gen1_large | 0 | 2 | transformers | 2023-06-09T03:52:18 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_sm_gen1_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_sm_gen1_large
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3342
- Accuracy: 0.8342
- Precision: 0.6462
- Recall: 0.3993
- F1: 0.4936
- D-index: 1.6166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 96000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.4153 | 1.0 | 1500 | 0.3934 | 0.8279 | 0.7013 | 0.2610 | 0.3804 | 1.5630 |
| 0.3682 | 2.0 | 3000 | 0.3595 | 0.8448 | 0.6802 | 0.4405 | 0.5347 | 1.6439 |
| 0.3509 | 3.0 | 4500 | 0.3559 | 0.847 | 0.7258 | 0.3923 | 0.5094 | 1.6314 |
| 0.3266 | 4.0 | 6000 | 0.3545 | 0.8484 | 0.7335 | 0.3944 | 0.5130 | 1.6339 |
| 0.2927 | 5.0 | 7500 | 0.3728 | 0.8519 | 0.7251 | 0.4323 | 0.5417 | 1.6506 |
| 0.265 | 6.0 | 9000 | 0.3836 | 0.8511 | 0.7019 | 0.4594 | 0.5554 | 1.6581 |
| 0.2284 | 7.0 | 10500 | 0.4332 | 0.8477 | 0.6611 | 0.5076 | 0.5743 | 1.6688 |
| 0.1903 | 8.0 | 12000 | 0.4834 | 0.8452 | 0.6970 | 0.4166 | 0.5215 | 1.6368 |
| 0.1527 | 9.0 | 13500 | 0.5702 | 0.8413 | 0.6809 | 0.4068 | 0.5093 | 1.6285 |
| 0.1296 | 10.0 | 15000 | 0.5942 | 0.8374 | 0.6585 | 0.4088 | 0.5044 | 1.6240 |
| 0.1158 | 11.0 | 16500 | 0.7754 | 0.8408 | 0.6680 | 0.4249 | 0.5194 | 1.6336 |
| 0.1054 | 12.0 | 18000 | 0.7936 | 0.8357 | 0.6062 | 0.5368 | 0.5694 | 1.6622 |
| 0.0879 | 13.0 | 19500 | 1.0568 | 0.8317 | 0.6971 | 0.2985 | 0.4180 | 1.5806 |
| 0.0834 | 14.0 | 21000 | 0.9730 | 0.8377 | 0.6393 | 0.4545 | 0.5313 | 1.6389 |
| 0.0744 | 15.0 | 22500 | 1.0385 | 0.8358 | 0.6390 | 0.4343 | 0.5172 | 1.6301 |
| 0.0675 | 16.0 | 24000 | 1.1625 | 0.8353 | 0.6305 | 0.4496 | 0.5249 | 1.6342 |
| 0.065 | 17.0 | 25500 | 1.2138 | 0.8325 | 0.6546 | 0.3652 | 0.4688 | 1.6034 |
| 0.0539 | 18.0 | 27000 | 1.2701 | 0.8334 | 0.6754 | 0.3409 | 0.4531 | 1.5967 |
| 0.0479 | 19.0 | 28500 | 1.2759 | 0.8367 | 0.6303 | 0.4681 | 0.5372 | 1.6420 |
| 0.0503 | 20.0 | 30000 | 1.3342 | 0.8342 | 0.6462 | 0.3993 | 0.4936 | 1.6166 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
peihsin/sqlfinal_combine_ch | 2023-06-09T09:05:00.000Z | ["transformers", "pytorch", "bert", "feature-extraction", "text-classification", "zh", "license:apache-2.0", "endpoints_compatible", "region:us"] | text-classification | peihsin | null | null | peihsin/sqlfinal_combine_ch | 0 | 2 | transformers | 2023-06-09T07:19:03 | ---
license: apache-2.0
language:
- zh
metrics:
- bleu
pipeline_tag: text-classification
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
datasets:
- shibing624/nli_zh
language:
- zh
metrics:
- bleu
library_name: transformers
ba-claim/bert | 2023-07-02T21:18:04.000Z | ["transformers", "pytorch", "bert", "text-classification", "endpoints_compatible", "region:us"] | text-classification | ba-claim | null | null | ba-claim/bert | 0 | 2 | transformers | 2023-06-09T08:49:15 | ---
widget:
- text: "They're able to charge women more for the same exact procedure a man gets."
example_title: "Example: Yes"
- text: "There's no way they would give it up."
example_title: "Example: No"
---
# ba-claim/bert
## Model Details
Fine-tuned BERT Model for Claim Relevance Identification
Based on this model: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity
### Model Description
This Hugging Face model is a fine-tuned BERT model specifically developed for identifying relevant claims in the context of combating fake news.
The model was trained as part of a bachelor thesis project aimed at automating the fact-checking process by automatically identifying claims of interest.
The project participated in the CheckThat!2023 competition, focusing on task 1B, organized by the Conference and Labs of the Evaluation Forum (CLEF).
The CheckThat! lab provided relevant training data for predicting the checkworthiness of claims.
The data was analyzed, and various transformer models, including DistilBERT and ELECTRA, were experimented with to identify the most effective architecture.
Overall, this fine-tuned BERT model serves as a valuable tool in automating the identification of relevant claims, reducing the need for manual fact-checking, and contributing to efforts to combat the challenges posed by the widespread dissemination of fake news.
#### Examples
| ID | Text | Label |
|------|------|-------|
| 37440 | There's no way they would give it up. | No |
| 37463 | They're able to charge women more for the same exact procedure a man gets. | Yes |
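For sequence classification the model returns one raw logit per class. A minimal, hypothetical sketch of turning a pair of logits into a Yes/No decision with a softmax — the label order used here is an assumption, so check the model's actual `id2label` mapping:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def to_label(logits, id2label=("No", "Yes")):
    # Pick the class with the highest probability.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]

label, confidence = to_label([-1.2, 2.3])  # -> ("Yes", high confidence)
```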
## Training Details
|Hyperparameters||
|----|----|
| Learning Rate|9.459e-05|
| Weight Decay|2.737e-04|
| Batch Size|64|
| Number of Epochs|4|
Piun/my_defines | 2023-06-09T09:00:43.000Z | ["keras", "has_space", "region:us"] | null | Piun | null | null | Piun/my_defines | 0 | 2 | keras | 2023-06-09T08:56:40 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset

Pogonyalov47/model47 | 2023-06-09T14:15:56.000Z | ["keras", "region:us"] | null | Pogonyalov47 | null | null | Pogonyalov47/model47 | 0 | 2 | keras | 2023-06-09T14:07:01 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

Au3609/digits | 2023-06-09T16:16:53.000Z | ["keras", "region:us"] | null | Au3609 | null | null | Au3609/digits | 0 | 2 | keras | 2023-06-09T16:14:33 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

CyberTea/digits | 2023-06-09T17:06:53.000Z | ["keras", "region:us"] | null | CyberTea | null | null | CyberTea/digits | 1 | 2 | keras | 2023-06-09T17:04:35 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

mrm8488/gpt2-large-finetuned-jhegarty-texts | 2023-06-09T17:36:03.000Z | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us"] | text-generation | mrm8488 | null | null | mrm8488/gpt2-large-finetuned-jhegarty-texts | 0 | 2 | transformers | 2023-06-09T17:25:12 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-finetuned-jhegarty-texts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-finetuned-jhegarty-texts
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 25 | 3.9585 |
| No log | 2.0 | 50 | 3.9082 |
| No log | 3.0 | 75 | 3.9567 |
| No log | 4.0 | 100 | 4.0397 |
| No log | 5.0 | 125 | 4.1128 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
Alim2003/digits | 2023-06-09T18:51:02.000Z | ["keras", "region:us"] | null | Alim2003 | null | null | Alim2003/digits | 0 | 2 | keras | 2023-06-09T17:37:29 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset

MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136096 | 2023-06-09T18:08:22.000Z | ["transformers", "pytorch", "safetensors", "deberta", "text-classification", "autotrain", "en", "dataset:MarketingHHM/autotrain-data-predictiveoutcomes_transcripts", "co2_eq_emissions", "endpoints_compatible", "region:us"] | text-classification | MarketingHHM | null | null | MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136096 | 0 | 2 | transformers | 2023-06-09T18:00:40 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- MarketingHHM/autotrain-data-predictiveoutcomes_transcripts
co2_eq_emissions:
emissions: 4.830666534700561
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 65215136096
- CO2 Emissions (in grams): 4.8307
## Validation Metrics
- Loss: 0.779
- Accuracy: 0.737
- Macro F1: 0.144
- Micro F1: 0.737
- Weighted F1: 0.656
- Macro Precision: 0.175
- Micro Precision: 0.737
- Weighted Precision: 0.650
- Macro Recall: 0.154
- Micro Recall: 0.737
- Weighted Recall: 0.737
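The large gap between macro F1 (0.144) and micro F1 (0.737) usually means most predictions land on a few frequent classes while rare classes are missed entirely. A self-contained sketch (with made-up labels, not this model's data) showing how the two averages diverge:

```python
def f1_scores(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    per_class = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 is 0 for a class with no true positives.
        per_class.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    macro = sum(per_class) / len(per_class)
    # For single-label multi-class data, micro-averaged F1 equals accuracy.
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return macro, micro

# One dominant class predicted everywhere: micro looks fine, macro collapses.
y_true = ["a"] * 8 + ["b", "c"]
y_pred = ["a"] * 10
macro, micro = f1_scores(y_true, y_pred)  # macro ≈ 0.30, micro = 0.80
```

Micro-averaging pools all decisions together, while macro-averaging weights every class equally, so classes the model never predicts correctly drag it down.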
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136096
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136096", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136096", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136104 | 2023-06-09T18:03:13.000Z | ["transformers", "pytorch", "safetensors", "distilbert", "text-classification", "autotrain", "en", "dataset:MarketingHHM/autotrain-data-predictiveoutcomes_transcripts", "co2_eq_emissions", "endpoints_compatible", "region:us"] | text-classification | MarketingHHM | null | null | MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136104 | 0 | 2 | transformers | 2023-06-09T18:01:15 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- MarketingHHM/autotrain-data-predictiveoutcomes_transcripts
co2_eq_emissions:
emissions: 0.27748265307081604
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 65215136104
- CO2 Emissions (in grams): 0.2775
## Validation Metrics
- Loss: 0.787
- Accuracy: 0.733
- Macro F1: 0.121
- Micro F1: 0.733
- Weighted F1: 0.620
- Macro Precision: 0.105
- Micro Precision: 0.733
- Weighted Precision: 0.537
- Macro Recall: 0.143
- Micro Recall: 0.733
- Weighted Recall: 0.733
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136104
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136104", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MarketingHHM/autotrain-predictiveoutcomes_transcripts-65215136104", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
Disskretnost/kucherenko_aleksandr | 2023-06-09T18:17:37.000Z | ["keras", "region:us"] | null | Disskretnost | null | null | Disskretnost/kucherenko_aleksandr | 0 | 2 | keras | 2023-06-09T18:13:51 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

venomdenom/digits | 2023-06-09T19:02:30.000Z | ["keras", "region:us"] | null | venomdenom | null | null | venomdenom/digits | 0 | 2 | keras | 2023-06-09T18:43:52 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset

msproper/digits | 2023-06-09T18:58:55.000Z | ["keras", "region:us"] | null | msproper | null | null | msproper/digits | 0 | 2 | keras | 2023-06-09T18:53:43 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

Konstantin12/Bulatovv | 2023-06-09T20:05:33.000Z | ["keras", "region:us"] | null | Konstantin12 | null | null | Konstantin12/Bulatovv | 0 | 2 | keras | 2023-06-09T20:02:41 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

N0vel/Praktika3_SemenovRS | 2023-06-09T20:26:49.000Z | ["keras", "region:us"] | null | N0vel | null | null | N0vel/Praktika3_SemenovRS | 0 | 2 | keras | 2023-06-09T20:25:40 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

DunnBC22/van-base-Brain_Tumors_Image_Classification | 2023-07-23T02:07:08.000Z | ["transformers", "pytorch", "tensorboard", "van", "image-classification", "generated_from_trainer", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us"] | image-classification | DunnBC22 | null | null | DunnBC22/van-base-Brain_Tumors_Image_Classification | 1 | 2 | transformers | 2023-06-09T21:06:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: van-base-Brain_Tumors_Image_Classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7918781725888325
language:
- en
pipeline_tag: image-classification
---
<h1>van-base-Brain_Tumors_Image_Classification</h1>
This model is a fine-tuned version of [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base).
It achieves the following results on the evaluation set:
- Loss: 1.7847
- Accuracy: 0.7919
- Weighted f1: 0.7588
- Micro f1: 0.7919
- Macro f1: 0.7665
- Weighted recall: 0.7919
- Micro recall: 0.7919
- Macro recall: 0.7865
- Weighted precision: 0.8505
- Micro precision: 0.7919
- Macro precision: 0.8675
<div style="text-align: center;">
<h2>
Model Description
</h2>
<a href="https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Brain%20Tumors%20Image%20Classification%20Comparison/VAN%20-%20Image%20Classification.ipynb">
Click here for the code that I used to create this model.
</a>
This project is part of a comparison of seventeen (17) transformers.
<a href="https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Brain%20Tumors%20Image%20Classification%20Comparison/README.md">
Click here to see the README markdown file for the full project.
</a>
<h2>
Intended Uses & Limitations
</h2>
This model is intended to demonstrate my ability to solve a complex problem using technology.
<h2>
Training & Evaluation Data
</h2>
<a href="https://www.kaggle.com/datasets/sartajbhuvaji/brain-tumor-classification-mri">
Brain Tumor Image Classification Dataset
</a>
<h2>
Sample Images
</h2>
<img src="https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/raw/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Brain%20Tumors%20Image%20Classification%20Comparison/Images/Sample%20Images.png" />
<h2>
Class Distribution of Training Dataset
</h2>
<img src="https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/raw/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Brain%20Tumors%20Image%20Classification%20Comparison/Images/Class%20Distribution%20-%20Training%20Dataset.png"/>
<h2>
Class Distribution of Evaluation Dataset
</h2>
<img src="https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/raw/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Brain%20Tumors%20Image%20Classification%20Comparison/Images/Class%20Distribution%20-%20Testing%20Dataset.png"/>
</div>
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 1.3357 | 1.0 | 180 | 1.5273 | 0.7183 | 0.6631 | 0.7183 | 0.6695 | 0.7183 | 0.7183 | 0.7058 | 0.8219 | 0.7183 | 0.8420 |
| 1.3357 | 2.0 | 360 | 1.9359 | 0.7792 | 0.7314 | 0.7792 | 0.7411 | 0.7792 | 0.7792 | 0.7764 | 0.8467 | 0.7792 | 0.8636 |
| 0.1229 | 3.0 | 540 | 1.7847 | 0.7919 | 0.7588 | 0.7919 | 0.7665 | 0.7919 | 0.7919 | 0.7865 | 0.8505 | 0.7919 | 0.8675 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
ProgramadorArtificial/wav2vec2-large-xlsr-53-portuguese | 2023-06-13T23:42:58.000Z | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "Wav2Vec2", "speech-to-text", "pt", "dataset:mozilla-foundation/common_voice_13_0", "dataset:falabrasil/Audio_Corpora-Bases_de_áudio", "dataset:lucasgris/wav2vec4bp", "dataset:Edresson/TTS-Portuguese-Corpus", "dataset:voxfo... | automatic-speech-recognition | ProgramadorArtificial | null | null | ProgramadorArtificial/wav2vec2-large-xlsr-53-portuguese | 0 | 2 | transformers | 2023-06-09T22:17:06 | ---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_13_0
- falabrasil/Audio_Corpora-Bases_de_áudio
- lucasgris/wav2vec4bp
- Edresson/TTS-Portuguese-Corpus
- voxforge/voxforge-pt-dataset
- programadorartificial/custom_dataset
language:
- pt
metrics:
- wer
- cer
pipeline_tag: automatic-speech-recognition
tags:
- Wav2Vec2
- speech-to-text
---
# Wav2Vec 2.0 - Brazilian Portuguese
This model is a fine-tuned version of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model, trained on the following datasets:
- [Common Voice 13.0](https://commonvoice.mozilla.org/pt/datasets)
- [FalaBrasil](https://github.com/falabrasil/gitlab-resources)
- [Multilingual Librispeech (MLS) Portuguese](http://www.openslr.org/94/)
- [TTS-Portuguese-Corpus](https://github.com/Edresson/TTS-Portuguese-Corpus)
- [VoxForge](https://www.voxforge.org/pt/Downloads)
- Custom_dataset - With me talking - Not available
All datasets were pre-processed and cleaned (trying to keep only Brazilian speakers); the original training, testing, and validation splits were not used. The files used for training, testing, and validation are in the "dataset_files" folder.
The model was fine-tuned using the [ProgramadorArtificial/transformers](https://github.com/ProgramadorArtificial/transformers/tree/main) repository, specifically the [speech-recognition](https://github.com/ProgramadorArtificial/transformers/tree/main/examples/pytorch/speech-recognition) examples folder.
```python
from transformers import AutoModelForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained('ProgramadorArtificial/wav2vec2-large-xlsr-53-portuguese')
model = AutoModelForCTC.from_pretrained('ProgramadorArtificial/wav2vec2-large-xlsr-53-portuguese')
```
Video explaining how this model was trained (portuguese): https://youtu.be/ff_NQVUwtx4
## Results on the test and validation datasets
### Test
| WER | CER |
|-------|------|
| 11.7% | 3.3% |
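WER and CER are edit distances normalized by the reference length, computed over words and characters respectively. A minimal self-contained sketch of both metrics — not necessarily the exact evaluation script used for this model:

```python
def edit_distance(ref, hyp):
    # Textbook Levenshtein distance with a rolling one-row DP table.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def wer(reference, hypothesis):
    # Word error rate: edit distance over word tokens.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate: edit distance over characters.
    return edit_distance(reference, hypothesis) / len(reference)

# Second pair from the test table below: two substituted words out of ten.
ref = "os empréstimos do banco mundial exigem contrapartidas dos governos beneficiados"
hyp = "os empréstimos do banco mundial exigem contrapartidas do governo beneficiados"
pair_wer = wer(ref, hyp)  # 2 substitutions / 10 reference words = 0.2
```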
| Prediction | Real |
| ------------ |---------------------------------------------------------------------------------|
| ele é considerado por seus companheiros de tropa como um oficial moderado | ele é considerado por seus companheiros de tropa como um oficial moderado |
| os empréstimos do banco mundial exigem contrapartidas do governo beneficiados | os empréstimos do banco mundial exigem contrapartidas dos governos beneficiados |
| mwendel queiroz rodrigues | wendell queiroz rodrgues |
| virmontes | virmond |
| conversões pelo dólar turismo a mil seiscentos e oitenta reais | conversões pelo dólar turismo a mil seiscentos e oitenta reais |
| o grupo de moda são paulo promove o seu primeiro encontro | o grupo de moda são paulo promove o seu primeiro encontro |
| abandonou a frança e se fixou em são paulo | abandonou a frança e se fixou em são paulo |
| o avanço da tecnologia fez esta divisão perder o sentido | o avanço da tecnologia fez esta divisão perder o sentido |
| reservadamente confessa não entender o comportamento do presidente | reservadamente confessa não entender o comportamento do presidente |
| foi definido o campeonato estadual que começa no dia trinta | foi definido o campeonato estadual que começa no dia trinta |
### Validation
| WER | CER |
|------|------|
| 9.5% | 2.6% |
| Prediction | Real |
| ------------- | ------------- |
| pontex gestal | pontes gestal |
| o cruzeiro real continua valendo até o dia quinze de julho | o cruzeiro real continua valendo até o dia quinze de julho |
| o espaço fica portanto vago e disponível para o traficante | o espaço fica portanto vago e disponível para o traficante |
| os botões estão empilhados | os botões estão empilhados |
| as chances que apareciam eram perdidas pelos atacantes cruzeirenses | as chances que apareciam eram perdidas pelos atacantes cruzeirenses |
| possibilitar que a população se sinta identificada com o estado | possibilitar que a população se sinta identificada com o estado |
| os detentos serão transferidos das delegacias que estiverem lotadas | os detentos serão transferidos das delegacias que estiverem lotadas |
| a euforia pela pista menos quente causou três incidentes | a euforia pela pista menos quente causou três incidentes |
| almeida é presidente da liga independente das escolas de samba | almeida é presidente da liga independente das escolas de samba |
| os modos de seu pensamento as cendências de seu espírito e até as menores particularidades de sua vida é nessa fonte que deve beber o poeta brasileiro é dela que há de sair o verdadeiro poema nacional tal como eu o imagino | os modos de seu pensamento as tendências de seu espírito e até as menores particularidades de sua vida é nessa fonte que deve beber o poeta brasileiro é dela que há de sair o verdadeiro poema nacional tal como eu o imagino |
## Author
* **Programador Artificial** - [GitHub](https://github.com/ProgramadorArtificial) - [YouTube](https://www.youtube.com/@ProgramadorArtificial)
Zumaridi/opus-mt-en-sw-finetuned-en-to-sw | 2023-06-09T23:44:49.000Z | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | Zumaridi | null | null | Zumaridi/opus-mt-en-sw-finetuned-en-to-sw | 0 | 2 | transformers | 2023-06-09T23:39:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-sw-finetuned-en-to-sw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-sw-finetuned-en-to-sw
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6973
- Bleu: 58.329
- Gen Len: 17.8688
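The BLEU score above (reported on a 0-100 scale) measures n-gram overlap between generated and reference translations. A rough, self-contained sketch of sentence-level BLEU with uniform weights and no smoothing — real evaluations normally use a library such as sacrebleu:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    # Modified n-gram precision for n = 1..max_n, uniform weights.
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # without smoothing, any empty n-gram level zeroes the score
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Multiply by 100 to match the scale in the table; corpus-level BLEU also pools n-gram counts across all sentences rather than averaging per-sentence scores.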
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 113 | 0.8883 | 48.4933 | 20.2061 |
| No log | 2.0 | 226 | 0.7719 | 57.3125 | 18.0965 |
| No log | 3.0 | 339 | 0.7126 | 58.3384 | 17.8894 |
| No log | 4.0 | 452 | 0.6973 | 58.329 | 17.8688 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
shamiulshifat/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-10T09:14:11.000Z | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | shamiulshifat | null | null | shamiulshifat/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-10T09:13:44 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 205.00 +/- 60.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shamiulshifat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shamiulshifat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shamiulshifat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
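With `exploration_fraction` 0.1 of 100,000 timesteps and `exploration_final_eps` 0.01, ε-greedy exploration is annealed linearly down to 0.01 over the first 10,000 steps, then held flat. A small sketch of that schedule, assuming the default initial ε of 1.0 (not shown in the hyperparameters above):

```python
def epsilon(step, total_timesteps=100_000, fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    # Linear annealing over the first `fraction` of training, then flat.
    progress = min(step / (fraction * total_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

eps_start = epsilon(0)       # 1.0: fully random at the start
eps_mid = epsilon(5_000)     # halfway through the annealing window
eps_end = epsilon(50_000)    # annealing finished: ~0.01
```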
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
destrat/digits | 2023-06-10T11:38:11.000Z | ["keras", "region:us"] | null | destrat | null | null | destrat/digits | 0 | 2 | keras | 2023-06-10T11:35:54 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset

mirfan899/kids_phoneme_sm_model | 2023-10-17T08:11:20.000Z | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:mirfan899/kids_phoneme_sm", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | mirfan899 | null | null | mirfan899/kids_phoneme_sm_model | 1 | 2 | transformers | 2023-06-10T11:56:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mirfan899/kids_phoneme_sm
base_model: facebook/wav2vec2-large-xlsr-53
model-index:
- name: kids_phoneme_sm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kids_phoneme_sm_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the https://huggingface.co/datasets/mirfan899/kids_phoneme_sm dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5405
- Cer: 0.2770
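The Cer figure above is the character error rate: the character-level Levenshtein (edit) distance between the hypothesis and reference transcripts, divided by the reference length. A minimal sketch of that computation:

```python
def cer(reference, hypothesis):
    # Character error rate: Levenshtein distance / reference length.
    m, n = len(reference), len(hypothesis)
    dist = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = min(dist[j] + 1,        # deletion
                      dist[j - 1] + 1,    # insertion
                      prev + (reference[i - 1] != hypothesis[j - 1]))  # substitution
            prev, dist[j] = dist[j], cur
    return dist[n] / m

print(cer("hello", "helo"))  # 1 deletion / 5 reference chars = 0.2
```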
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.2595 | 0.74 | 500 | 3.7094 | 1.0 |
| 2.8393 | 1.48 | 1000 | 3.2563 | 1.0 |
| 2.7916 | 2.22 | 1500 | 3.0450 | 1.0 |
| 1.9585 | 2.96 | 2000 | 1.0280 | 0.8428 |
| 1.0099 | 3.7 | 2500 | 0.6477 | 0.5162 |
| 0.7968 | 4.44 | 3000 | 0.5551 | 0.4592 |
| 0.6977 | 5.19 | 3500 | 0.5107 | 0.4065 |
| 0.609 | 5.93 | 4000 | 0.4763 | 0.3916 |
| 0.5941 | 6.67 | 4500 | 0.4817 | 0.3850 |
| 0.5411 | 7.41 | 5000 | 0.4755 | 0.3639 |
| 0.5021 | 8.15 | 5500 | 0.4649 | 0.3622 |
| 0.4884 | 8.89 | 6000 | 0.4630 | 0.3569 |
| 0.4484 | 9.63 | 6500 | 0.4675 | 0.3420 |
| 0.4432 | 10.37 | 7000 | 0.4192 | 0.3402 |
| 0.399 | 11.11 | 7500 | 0.4508 | 0.3310 |
| 0.4215 | 11.85 | 8000 | 0.4406 | 0.3345 |
| 0.366 | 12.59 | 8500 | 0.4620 | 0.3248 |
| 0.3708 | 13.33 | 9000 | 0.4594 | 0.3327 |
| 0.3352 | 14.07 | 9500 | 0.4649 | 0.3121 |
| 0.3468 | 14.81 | 10000 | 0.4413 | 0.3020 |
| 0.3283 | 15.56 | 10500 | 0.4948 | 0.2915 |
| 0.3222 | 16.3 | 11000 | 0.4870 | 0.3025 |
| 0.3081 | 17.04 | 11500 | 0.4779 | 0.2919 |
| 0.3099 | 17.78 | 12000 | 0.4927 | 0.2871 |
| 0.2485 | 18.52 | 12500 | 0.5013 | 0.2831 |
| 0.3163 | 19.26 | 13000 | 0.4929 | 0.2888 |
| 0.2555 | 20.0 | 13500 | 0.5234 | 0.2888 |
| 0.2705 | 20.74 | 14000 | 0.5259 | 0.2818 |
| 0.2632 | 21.48 | 14500 | 0.5105 | 0.2831 |
| 0.2374 | 22.22 | 15000 | 0.5284 | 0.2845 |
| 0.2565 | 22.96 | 15500 | 0.5237 | 0.2875 |
| 0.2394 | 23.7 | 16000 | 0.5368 | 0.2818 |
| 0.2458 | 24.44 | 16500 | 0.5386 | 0.2814 |
| 0.2383 | 25.19 | 17000 | 0.5366 | 0.2788 |
| 0.2152 | 25.93 | 17500 | 0.5320 | 0.2770 |
| 0.231 | 26.67 | 18000 | 0.5441 | 0.2779 |
| 0.2061 | 27.41 | 18500 | 0.5448 | 0.2796 |
| 0.245 | 28.15 | 19000 | 0.5413 | 0.2796 |
| 0.2119 | 28.89 | 19500 | 0.5379 | 0.2774 |
| 0.2155 | 29.63 | 20000 | 0.5405 | 0.2770 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
| 3,841 | [
[
-0.0408935546875,
-0.0411376953125,
0.0079498291015625,
0.00499725341796875,
-0.006763458251953125,
-0.006191253662109375,
0.0003228187561035156,
-0.005229949951171875,
0.03955078125,
0.0245819091796875,
-0.052947998046875,
-0.046905517578125,
-0.046722412109375... |
minoosh/finetuned_wav2vec2.0-base-on-IEMOCAP_4 | 2023-06-10T14:58:42.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | minoosh | null | null | minoosh/finetuned_wav2vec2.0-base-on-IEMOCAP_4 | 0 | 2 | transformers | 2023-06-10T12:25:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned_wav2vec2.0-base-on-IEMOCAP_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_wav2vec2.0-base-on-IEMOCAP_4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6468
- Accuracy: 0.6586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
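The `total_train_batch_size` of 32 above is the product of `train_batch_size` 8 and `gradient_accumulation_steps` 4: gradients from four micro-batches are accumulated before each optimizer step. A toy pure-Python sketch of that loop (scalar "gradients" stand in for real tensors; the function name is illustrative):

```python
def train_with_accumulation(micro_batch_grads, accumulation_steps=4, lr=3e-05):
    """Apply one parameter update per `accumulation_steps` micro-batch
    gradients, averaging them before the step."""
    param, buffer, updates = 0.0, [], []
    for g in micro_batch_grads:
        buffer.append(g)
        if len(buffer) == accumulation_steps:
            param -= lr * sum(buffer) / len(buffer)  # one optimizer step
            updates.append(param)
            buffer.clear()
    return updates

# Eight micro-batches of 8 samples -> two optimizer steps at effective batch 32.
print(len(train_with_accumulation([1.0] * 8)))  # 2
```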
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1816 | 1.0 | 112 | 1.1338 | 0.5177 |
| 1.1233 | 2.0 | 225 | 1.0683 | 0.5355 |
| 0.9672 | 3.0 | 337 | 0.9371 | 0.6031 |
| 0.6925 | 4.0 | 450 | 0.8086 | 0.6818 |
| 0.6162 | 5.0 | 562 | 0.7848 | 0.7051 |
| 0.4226 | 6.0 | 675 | 0.7377 | 0.7428 |
| 0.3047 | 7.0 | 787 | 0.7981 | 0.7251 |
| 0.2385 | 8.0 | 900 | 0.8558 | 0.7428 |
| 0.1954 | 9.0 | 1012 | 0.8869 | 0.7373 |
| 0.1454 | 10.0 | 1125 | 0.9470 | 0.7428 |
| 0.116 | 11.0 | 1237 | 1.0448 | 0.7306 |
| 0.1017 | 12.0 | 1350 | 1.1298 | 0.7295 |
| 0.1365 | 13.0 | 1462 | 1.1667 | 0.7450 |
| 0.0995 | 14.0 | 1575 | 1.1252 | 0.7428 |
| 0.0655 | 14.93 | 1680 | 1.1501 | 0.7395 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,336 | [
[
-0.038330078125,
-0.04107666015625,
0.00518035888671875,
0.00833892822265625,
-0.01226043701171875,
-0.026702880859375,
-0.0097198486328125,
-0.0187225341796875,
0.01629638671875,
0.020751953125,
-0.058868408203125,
-0.050811767578125,
-0.046539306640625,
-0... |
alicata/growth-stages | 2023-06-10T17:24:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | alicata | null | null | alicata/growth-stages | 0 | 2 | transformers | 2023-06-10T17:24:40 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: growth-stages
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6788991093635559
---
# growth-stages
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### adult

#### baby

#### child

#### elder

#### toddler
 | 846 | [
[
-0.037811279296875,
-0.042266845703125,
0.0107269287109375,
0.05364990234375,
-0.0240478515625,
0.00803375244140625,
0.0322265625,
-0.026153564453125,
0.02679443359375,
0.003082275390625,
-0.062225341796875,
-0.05419921875,
-0.04827880859375,
-0.004936218261... |
DanPietrow1/ppo-LunarLander-v2 | 2023-06-10T23:13:35.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | DanPietrow1 | null | null | DanPietrow1/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-06-10T22:34:15 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.20 +/- 24.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
yam8572/dqn-SpaceInvaders-v5 | 2023-06-10T23:30:31.000Z | [
"stable-baselines3",
"ALE/SpaceInvaders-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | yam8572 | null | null | yam8572/dqn-SpaceInvaders-v5 | 0 | 2 | stable-baselines3 | 2023-06-10T23:30:07 | ---
library_name: stable-baselines3
tags:
- ALE/SpaceInvaders-v5
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ALE/SpaceInvaders-v5
type: ALE/SpaceInvaders-v5
metrics:
- type: mean_reward
value: 576.50 +/- 114.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **ALE/SpaceInvaders-v5**
This is a trained model of a **DQN** agent playing **ALE/SpaceInvaders-v5**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/SpaceInvaders-v5 -orga yam8572 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env ALE/SpaceInvaders-v5 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/SpaceInvaders-v5 -orga yam8572 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env ALE/SpaceInvaders-v5 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env ALE/SpaceInvaders-v5 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env ALE/SpaceInvaders-v5 -f logs/ -orga yam8572
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,680 | [
[
-0.0413818359375,
-0.034576416015625,
0.0213775634765625,
0.021728515625,
-0.007843017578125,
-0.016265869140625,
0.0101165771484375,
-0.01406097412109375,
0.0107421875,
0.0222320556640625,
-0.07073974609375,
-0.042236328125,
-0.0263824462890625,
-0.00218772... |
coyude/Nous-Hermes-13b-Chinese-GPTQ | 2023-06-13T00:47:18.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | coyude | null | null | coyude/Nous-Hermes-13b-Chinese-GPTQ | 4 | 2 | transformers | 2023-06-11T10:34:33 | ---
license: apache-2.0
language:
- zh
- en
library_name: transformers
pipeline_tag: text-generation
---
Original model: https://huggingface.co/NousResearch/Nous-Hermes-13b
LoRA: https://huggingface.co/ziqingyang/chinese-alpaca-lora-13b
Nous-Hermes-13b is merged with chinese-alpaca-lora-13b to enhance the model's Chinese capability, ~~though a translated tone remains~~
Projects used:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/qwopqwop200/GPTQ-for-LLaMa
**Compatible with AutoGPTQ and GPTQ-for-LLaMa**
**If loading with GPTQ-for-LLaMa, please set Wbits=4 groupsize=128 model_type=llama**
Text-generation-webui one-click bundle (guide in Chinese):
https://www.bilibili.com/read/cv23495183
---
Original model: https://huggingface.co/NousResearch/Nous-Hermes-13b
Lora: https://huggingface.co/ziqingyang/chinese-alpaca-lora-13b
The Nous-Hermes-13b model is combined with the chinese-alpaca-lora-13b model to enhance the model's Chinese language capabilities, ~~although there may be some translated tone~~.
Usage projects:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/qwopqwop200/GPTQ-for-LLaMa
**Compatible with AutoGPTQ and GPTQ-for-LLaMa**
**If you choose to load GPTQ-for-LLaMa, please set Wbits=4 groupsize=128 model_type=llama** | 1,279 | [
[
-0.05126953125,
-0.049041748046875,
0.0227508544921875,
0.045867919921875,
-0.055999755859375,
-0.01143646240234375,
0.01084136962890625,
-0.06329345703125,
0.036041259765625,
0.0279693603515625,
-0.0413818359375,
-0.047882080078125,
-0.04241943359375,
0.013... |
ml-projects/clickbait-ml_bert | 2023-06-13T11:38:55.000Z | [
"transformers",
"tf",
"onnx",
"bert",
"text-classification",
"generated_from_keras_callback",
"de",
"dataset:ml-projects/clickbait-ml_dataset",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | ml-projects | null | null | ml-projects/clickbait-ml_bert | 0 | 2 | transformers | 2023-06-11T15:06:38 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: clickbait-ml_bert
results: []
language:
- de
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: Bundesweiter Großstreik beginnt - Züge, Busse und Flugzeuge stehen still
example_title: Normale Überschrift
- text: Bachelor in Paradise-Star Pamela Gil Matas Sohn ist da!
example_title: Clickbait Überschrift
- text: Du wirst nie glauben was hier geschah
example_title: Beispiel
datasets:
- ml-projects/clickbait-ml_dataset
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clickbait-ml_bert
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6057
- Validation Loss: 0.6160
- Train Accuracy: 0.8235
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
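With `power=1.0` and `cycle=False`, the `PolynomialDecay` schedule above is a linear ramp from 2e-05 down to 0 over 8 decay steps, then constant. A pure-Python re-implementation for illustration (not the Keras object itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=8, power=1.0):
    # Keras PolynomialDecay with cycle=False clamps the step at decay_steps.
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))    # 2e-05
print(polynomial_decay(4))    # 1e-05
print(polynomial_decay(100))  # 0.0
```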
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7115 | 0.6299 | 0.8235 | 0 |
| 0.6071 | 0.6160 | 0.8235 | 1 |
| 0.5783 | 0.6160 | 0.8235 | 2 |
| 0.6057 | 0.6160 | 0.8235 | 3 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3 | 2,079 | [
[
-0.0447998046875,
-0.045654296875,
0.0211639404296875,
0.00885772705078125,
-0.0303497314453125,
-0.035186767578125,
-0.0125579833984375,
-0.0190582275390625,
0.012542724609375,
0.0126953125,
-0.0482177734375,
-0.051300048828125,
-0.055450439453125,
-0.01896... |
a-d/D-RL-BU-1-ppo-Huggy | 2023-06-11T16:03:59.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | a-d | null | null | a-d/D-RL-BU-1-ppo-Huggy | 0 | 2 | ml-agents | 2023-06-11T16:03:48 | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: a-d/D-RL-BU-1-ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,323 | [
[
-0.042572021484375,
-0.047088623046875,
0.0166015625,
0.003467559814453125,
-0.0149078369140625,
0.016998291015625,
0.0140380859375,
-0.0219573974609375,
0.041900634765625,
0.033843994140625,
-0.049774169921875,
-0.046600341796875,
-0.0306549072265625,
-0.01... |
ihgn/gpt2-paraphrase | 2023-08-18T00:10:26.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | ihgn | null | null | ihgn/gpt2-paraphrase | 0 | 2 | transformers | 2023-06-11T16:25:56 | ---
language:
- en
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
---
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The checkpoint is T5-based (see the tags on this card), so it loads as a
# seq2seq LM; the loading lines are an assumption, not part of the original snippet.
tokenizer = AutoTokenizer.from_pretrained("ihgn/gpt2-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("ihgn/gpt2-paraphrase")

def paraphrase(
    question,
    num_beams=5,
    num_beam_groups=5,
    num_return_sequences=1,
    repetition_penalty=10.0,
    diversity_penalty=3.0,
    no_repeat_ngram_size=2,
    temperature=0.7,
    max_length=128,
):
    # Prefix the input as the model was trained, then tokenize.
    input_ids = tokenizer(
        f"paraphrase: {question}",
        return_tensors="pt",
        padding="longest",
        max_length=max_length,
        truncation=True,
    ).input_ids
    # Diverse beam search over num_beam_groups groups of beams.
    outputs = model.generate(
        input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
        num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
        num_beams=num_beams, num_beam_groups=num_beam_groups,
        max_length=max_length, diversity_penalty=diversity_penalty,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
[
-0.01004791259765625,
-0.057220458984375,
0.0335693359375,
0.032012939453125,
-0.034698486328125,
-0.022491455078125,
-0.0058441162109375,
0.026947021484375,
-0.0035533905029296875,
0.0160675048828125,
-0.0221405029296875,
-0.039520263671875,
-0.041351318359375,... |
PocketDoc/Dans-PersonalityEngine-13b-gptq-4bit-128g | 2023-06-13T04:15:01.000Z | [
"transformers",
"llama",
"text-generation",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | PocketDoc | null | null | PocketDoc/Dans-PersonalityEngine-13b-gptq-4bit-128g | 1 | 2 | transformers | 2023-06-12T00:47:32 | ---
language:
- en
---
### Description:
This is a multipurpose chat / chat instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one shot instructions, multi round instructions, and role playing scenarios.
### Prompt format:
Metharme
The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
Some example prompts:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
More will be added at a later date.
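The turn structure above can also be assembled programmatically; a minimal sketch (the function name and signature are illustrative, not part of the model's API):

```python
def build_metharme_prompt(system, turns=()):
    """Assemble a Metharme prompt. `turns` is a sequence of
    (user_message, model_reply) pairs; use None as the final reply so the
    prompt ends directly after "<|model|>" with no trailing space."""
    prompt = f"<|system|>{system}"
    if not turns:
        return prompt + "<|model|>"
    for user_msg, model_msg in turns:
        prompt += f"<|user|>{user_msg}<|model|>"
        if model_msg is not None:
            prompt += model_msg
    return prompt

print(build_metharme_prompt(
    "The following is a transcript between a helpful assistant and a user.",
    [("Why is the sky blue?", None)],
))  # reproduces the first example prompt above
```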
### Perplexity Benchmarks:
- TBA
### Training information:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- GPTQ 4 bit LoRA
- 7 Epochs
- 64 / 32 R / A
- 2048 Cutoff
- 18 hours on 4x RTX 4090s
### Data used in training:
- TBA
### Models used:
For training:
https://huggingface.co/PocketDoc/llama-13b-gptq-4bit-128g
For merging:
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-13b-LoRA
and
https://huggingface.co/huggyllama/llama-13b
### Disclaimer:
It has not been aligned and no warranty is given for the quality or safety of its outputs. | 2,388 | [
[
-0.02581787109375,
-0.038848876953125,
0.02191162109375,
0.01531219482421875,
-0.0265045166015625,
0.0104217529296875,
-0.0032100677490234375,
-0.032501220703125,
0.019500732421875,
0.0227813720703125,
-0.0667724609375,
-0.033599853515625,
-0.0237579345703125,
... |
coyude/Chinese-plus-Wizard-Vicuna-13B-GPTQ | 2023-06-13T00:50:41.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | coyude | null | null | coyude/Chinese-plus-Wizard-Vicuna-13B-GPTQ | 5 | 2 | transformers | 2023-06-12T05:04:54 | ---
license: apache-2.0
language:
- zh
- en
library_name: transformers
pipeline_tag: text-generation
---
Original model: https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF
LoRA: https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b
https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-13b
Wizard-Vicuna-13B-Uncensored-HF is merged with chinese-llama-plus-lora-13b and chinese-alpaca-plus-lora-13b to enhance the model's Chinese capability, ~~though a translated tone remains~~
Projects used:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/qwopqwop200/GPTQ-for-LLaMa
**Compatible with AutoGPTQ and GPTQ-for-LLaMa**
**If loading with GPTQ-for-LLaMa, please set Wbits=4 groupsize=128 model_type=llama**
Text-generation-webui one-click bundle (guide in Chinese):
https://www.bilibili.com/read/cv23495183
---
Original model: https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF
lora:https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b
https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-13b
The Wizard-Vicuna-13B-Uncensored-HF model is combined with the chinese-alpaca-plus-lora-13b model and chinese-llama-plus-lora-13b model to enhance the model's Chinese language capabilities, ~~although there may be some translated tone~~.
Usage projects:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
https://github.com/qwopqwop200/GPTQ-for-LLaMa
**Compatible with AutoGPTQ and GPTQ-for-LLaMa**
**If you choose to load GPTQ-for-LLaMa, please set Wbits=4 groupsize=128 model_type=llama** | 1,410 | [
[
-0.04583740234375,
-0.051177978515625,
0.01470947265625,
0.04510498046875,
-0.0545654296875,
-0.00798797607421875,
0.009033203125,
-0.06353759765625,
0.03692626953125,
0.0265045166015625,
-0.038299560546875,
-0.045501708984375,
-0.04547119140625,
0.014770507... |
NicoHi/whisper-base-voice-commands | 2023-08-14T15:48:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | NicoHi | null | null | NicoHi/whisper-base-voice-commands | 0 | 2 | transformers | 2023-06-12T13:56:33 | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-voice-commands
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-voice-commands
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2549
- Wer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1469 | 2.55 | 250 | 0.2371 | 0.0825 |
| 0.1369 | 5.1 | 500 | 0.2457 | 0.0 |
| 0.131 | 7.65 | 750 | 0.2549 | 0.0 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,368 | [
[
-0.018646240234375,
-0.03375244140625,
0.0167694091796875,
0.015655517578125,
-0.01422882080078125,
-0.0252685546875,
-0.01161956787109375,
-0.0078887939453125,
-0.0008220672607421875,
0.0311737060546875,
-0.06719970703125,
-0.056365966796875,
-0.050872802734375... |
predictia/europe_reanalysis_downscaler_convswin2sr | 2023-07-30T20:47:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"conv_swin2sr",
"climate",
"super-resolution",
"image-to-image",
"es",
"en",
"dataset:openclimatefix/era5",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | image-to-image | predictia | null | null | predictia/europe_reanalysis_downscaler_convswin2sr | 0 | 2 | transformers | 2023-06-12T14:18:26 | ---
license: apache-2.0
datasets:
- openclimatefix/era5
language:
- es
- en
metrics:
- mse
library_name: transformers
pipeline_tag: image-to-image
tags:
- climate
- transformers
- super-resolution
---
# Europe Reanalysis Super Resolution
The aim of the project is to create a machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to that produced by CERRA) by downscaling global reanalysis data from ERA5.
This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework is applied.
It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing the activations of different neurons and the importance of different features in the input data.
This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative.
# Table of Contents
- [Model Card for Europe Reanalysis Super Resolution](#europe-reanalysis-super-resolution)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Authors](#authors)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
Some cool model...
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Super-resolution (image-to-image) model
- **Language(s) (NLP):** en, es
- **License:** apache-2.0
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
- [GitHub Repo](https://github.com/ECMWFCode4Earth/DeepR)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
### Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
- Mario Santa Cruz
- Antonio Pérez
- Javier Díez | 6,767 | [
[
-0.04742431640625,
-0.047393798828125,
0.0303192138671875,
-0.0088653564453125,
-0.005809783935546875,
-0.0161285400390625,
-0.005828857421875,
-0.043365478515625,
0.0019702911376953125,
0.051483154296875,
-0.062225341796875,
-0.0540771484375,
-0.041839599609375... |
ShaneEP77/tolkientexts | 2023-06-21T20:54:01.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"text generation",
"en",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | ShaneEP77 | null | null | ShaneEP77/tolkientexts | 0 | 2 | transformers | 2023-06-12T14:46:14 |
---
language: en
thumbnail: "https://www.billboard.com/wp-content/uploads/media/Middle-earth-Shadow-of-War-GAME-Screenshot-2017-billboard-1548.jpg"
tags:
- text generation
- pytorch
license: mit
---
### Tolkientexts Model
Welcome! This README aims to provide a synopsis of how this model was trained and fine-tuned. Additionally, code examples are included with information on how to use this model.
## Description
This model was trained on 4 novels written by J.R.R. Tolkien, accessed as open-source texts from the internet and through (https://www.kaggle.com/), an open hub for datasets and data science projects.
The style is that of J.R.R. Tolkien: fantasy-esque, with vivid and complex descriptions, and both poetic and medieval in tone.
## Downstream Uses
This model can be used by fans of Tolkien's work for entertainment purposes.
## Recommended Usage
The recommended usage of this model is with Kobold AI Colab.
Click one of the links below. Where you are prompted to select a **Model:**, type "ShaneEP77/tolkientexts" into the drop-down menu and select that model. A clickable link will load; from there you can either enter text right away, or toggle to "New Game/Story", where the options "Blank Game/Story" and "Random Game/Story" are available.
Links to the GPU and TPU version can be found below:
1. **GPU**: https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb
2. **TPU**: https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('ShaneEP77/tolkientexts')
tokenizer = AutoTokenizer.from_pretrained('ShaneEP77/tolkientexts')
prompt = '''In the deep twilight of the Shire, beneath a sky adorned with a tapestry of shimmering stars, Bilbo Baggins embarked on a journey with an old friend, Gandalf.'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=0.8, top_p=0.85, top_k=50, typical_p=0.9, repetition_penalty=1.5, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
## tolkientexts
This model is a fine-tuned version of **EleutherAI/pythia-2.8b-deduped** (https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on **CoreWeave's** infrastructure (https://www.coreweave.com/).
**The books that the model was trained on include the following novels all written by J.R.R. Tolkien, which made up 1.48MiB of text:**
* "The Hobbit"
* "The Lord of the Rings: The Fellowship of the Ring"
* "The Lord of the Rings: The Two Towers"
* "The Lord of the Rings: The Return of the King"
**Epochs:** 1
**Steps:** 500
## Loss and Accuracy
Runs of the model were logged with Weights and Biases (https://wandb.ai/site). Charts were created based on 10-20 runs of the model and show a downward trend for loss as the number of steps increases. On the other hand, there appears to be an upward trend for accuracy as the number of steps increases.


## Meet the Team and Acknowledgements!
* Shane Epstein-Petrullo - Author
* CoreWeave - Computation Materials
*A huge thanks goes out to Wes Brown, David Finster, and Rex Wang for help with this project!*
*Referencing CoreWeave's tutorial and finetuner doc was pivotal to this project. This document can be found at (https://docs.coreweave.com/~/changes/UdikeGislByaE9hH8a7T/machine-learning-and-ai/training/fine-tuning/finetuning-machine-learning-models).*
| 3,766 | [
[
-0.0307464599609375,
-0.05511474609375,
0.006134033203125,
-0.01209259033203125,
-0.007694244384765625,
-0.00727081298828125,
-0.00675201416015625,
-0.04608154296875,
-0.007747650146484375,
0.03369140625,
-0.02923583984375,
-0.0277557373046875,
-0.04486083984375... |
yasinbastug/comment_toxicity_model | 2023-06-12T15:27:04.000Z | [
"keras",
"has_space",
"region:us"
] | null | yasinbastug | null | null | yasinbastug/comment_toxicity_model | 0 | 2 | keras | 2023-06-12T15:26:46 | ---
library_name: keras
---
## Model description
More information needed
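In the absence of documented usage, here is a minimal, hedged loading sketch. It assumes the model emits per-label sigmoid scores for multi-label toxicity (a common setup for comment-toxicity classifiers); the exact input preprocessing is undocumented, so `binarize` is only an illustrative post-processing helper.

```python
# Hedged sketch: load the Keras model from the Hub and binarize its
# per-label sigmoid outputs. The multi-label output format is an
# assumption, not documented by this card.

def binarize(probs, threshold=0.5):
    """Turn per-label sigmoid scores into 0/1 toxicity flags."""
    return [int(p >= threshold) for p in probs]

def demo():
    """Download the model (requires network) and inspect its inputs."""
    from huggingface_hub import from_pretrained_keras

    model = from_pretrained_keras("yasinbastug/comment_toxicity_model")
    model.summary()  # check the expected input shape before predict()
    return model
```

`demo()` is deliberately not invoked here, since it downloads the checkpoint from the Hub.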
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
huggingtweets/goddessalexaxox | 2023-06-12T16:47:03.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | huggingtweets | null | null | huggingtweets/goddessalexaxox | 0 | 2 | transformers | 2023-06-12T16:46:57 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1641218483873628161/O-rhkH7G_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Goddess Alexa ✨💖</div>
<div style="text-align: center; font-size: 14px;">@goddessalexaxox</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Goddess Alexa ✨💖.
| Data | Goddess Alexa ✨💖 |
| --- | --- |
| Tweets downloaded | 839 |
| Retweets | 34 |
| Short tweets | 81 |
| Tweets kept | 724 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fsdx8o80/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goddessalexaxox's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zokgkfo9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zokgkfo9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goddessalexaxox')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| 3,523 | [
[
-0.0212860107421875,
-0.06500244140625,
0.026947021484375,
0.011322021484375,
-0.0222930908203125,
0.01079559326171875,
-0.000946044921875,
-0.037017822265625,
0.026458740234375,
0.007537841796875,
-0.07464599609375,
-0.03271484375,
-0.050079345703125,
-0.00... |
huggingtweets/sainte_caramel | 2023-06-12T17:13:07.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | huggingtweets | null | null | huggingtweets/sainte_caramel | 0 | 2 | transformers | 2023-06-12T17:12:57 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1666742073888833539/rZoz5ZIm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𝐂𝐚𝐫𝐚𝐦𝐞𝐥 🧡 2DFD</div>
<div style="text-align: center; font-size: 14px;">@sainte_caramel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𝐂𝐚𝐫𝐚𝐦𝐞𝐥 🧡 2DFD.
| Data | 𝐂𝐚𝐫𝐚𝐦𝐞𝐥 🧡 2DFD |
| --- | --- |
| Tweets downloaded | 2808 |
| Retweets | 478 |
| Short tweets | 303 |
| Tweets kept | 2027 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lskhbqwr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sainte_caramel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x5m101wv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x5m101wv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sainte_caramel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| 3,518 | [
[
-0.0224151611328125,
-0.0640869140625,
0.0248260498046875,
0.02020263671875,
-0.01824951171875,
0.00909423828125,
-0.006130218505859375,
-0.035888671875,
0.0261688232421875,
0.01027679443359375,
-0.07147216796875,
-0.03472900390625,
-0.0494384765625,
-0.0106... |
UTibetNLP/cino-base-v2_TNCC-title_tsheg | 2023-07-23T13:33:27.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | UTibetNLP | null | null | UTibetNLP/cino-base-v2_TNCC-title_tsheg | 0 | 2 | transformers | 2023-06-13T03:16:38 | ---
license: apache-2.0
---
## Introduction
This is a fine-tuned LM from our paper below; the related GitHub repo is [here](https://github.com/metaphors/TibetanPLMsFineTuning).
***[Pay Attention to the Robustness of Chinese Minority Language Models! Syllable-level Textual Adversarial Attack on Tibetan Script](https://aclanthology.org/2023.trustnlp-1.4) (Cao et al., ACL 2023 Workshop - TrustNLP)***
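For readers who want to try the checkpoint, a hedged usage sketch follows (this is an assumption-based example, not from the paper; the label names are whatever the checkpoint's config defines):

```python
# Hedged sketch: score a Tibetan news title with the fine-tuned CINO
# checkpoint. softmax() is a pure helper; classify_title() does the
# actual Hub download and is not invoked here.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_title(title, name="UTibetNLP/cino-base-v2_TNCC-title_tsheg"):
    """Download the checkpoint (requires network) and score one title."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    inputs = tokenizer(title, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0].tolist()
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return model.config.id2label.get(best, best), probs[best]
```

Call `classify_title(...)` with a Tibetan news title; it is not run here because it downloads the model.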
## Citation
If you find our work useful, please kindly cite our paper.
```
@inproceedings{cao-etal-2023-pay-attention,
title = "Pay Attention to the Robustness of {C}hinese Minority Language Models! Syllable-level Textual Adversarial Attack on {T}ibetan Script",
author = "Cao, Xi and
Dawa, Dolma and
Qun, Nuo and
Nyima, Trashi",
booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.trustnlp-1.4",
pages = "35--46"
}
``` | 1,090 | [
[
-0.0067291259765625,
-0.048736572265625,
0.0098724365234375,
0.02471923828125,
-0.0287322998046875,
-0.00913238525390625,
-0.038970947265625,
-0.029937744140625,
0.0016012191772460938,
0.026397705078125,
-0.0184173583984375,
-0.049072265625,
-0.04498291015625,
... |
mamun4105/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-13T07:31:22.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | mamun4105 | null | null | mamun4105/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-13T06:10:32 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 640.00 +/- 199.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mamun4105 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mamun4105 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mamun4105
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,762 | [
[
-0.043731689453125,
-0.039642333984375,
0.0194854736328125,
0.0245819091796875,
-0.01123046875,
-0.0178985595703125,
0.010345458984375,
-0.0125885009765625,
0.01280975341796875,
0.0226593017578125,
-0.072265625,
-0.035125732421875,
-0.025665283203125,
-0.004... |
intanm/fewshot-qa-002-20230613-003 | 2023-06-13T07:30:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | intanm | null | null | intanm/fewshot-qa-002-20230613-003 | 0 | 2 | transformers | 2023-06-13T07:11:07 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: fewshot-qa-002-20230613-003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot-qa-002-20230613-003
This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3842
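Since no usage example is given, here is a hedged sketch for trying the checkpoint. `best_answer` is a pure helper for choosing among several candidate answers; the question and context strings are placeholders.

```python
# Hedged sketch: run extractive QA with the fine-tuned checkpoint.
# best_answer() is a pure helper; run_qa() does the Hub download and
# is not invoked here.

def best_answer(candidates):
    """Return the candidate dict with the highest 'score'."""
    return max(candidates, key=lambda c: c["score"])

def run_qa(question, context, name="intanm/fewshot-qa-002-20230613-003"):
    """Download the checkpoint (requires network) and answer one question."""
    from transformers import pipeline

    qa = pipeline("question-answering", model=name)
    out = qa(question=question, context=context, top_k=3)
    return best_answer(out if isinstance(out, list) else [out])
```

`run_qa(...)` is not called here because it downloads the model from the Hub.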
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 208 | 2.5896 |
| No log | 2.0 | 416 | 2.6143 |
| 2.487 | 3.0 | 624 | 2.7156 |
| 2.487 | 4.0 | 832 | 3.1187 |
| 1.2936 | 5.0 | 1040 | 3.3531 |
| 1.2936 | 6.0 | 1248 | 3.7272 |
| 1.2936 | 7.0 | 1456 | 3.9238 |
| 0.6852 | 8.0 | 1664 | 4.3116 |
| 0.6852 | 9.0 | 1872 | 4.3842 |
| 0.3944 | 10.0 | 2080 | 4.3842 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,761 | [
[
-0.0308685302734375,
-0.03546142578125,
0.01751708984375,
-0.00276947021484375,
-0.0173187255859375,
-0.02130126953125,
-0.0034351348876953125,
-0.01593017578125,
0.003131866455078125,
0.028594970703125,
-0.0689697265625,
-0.04681396484375,
-0.050872802734375,
... |
addy88/bert-finetuned-bpmn | 2023-06-13T09:15:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | addy88 | null | null | addy88/bert-finetuned-bpmn | 0 | 2 | transformers | 2023-06-13T09:06:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-bpmn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-bpmn
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3456
- Precision: 0.8113
- Recall: 0.86
- F1: 0.8350
- Accuracy: 0.9341
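Since the card gives no usage example, here is a hedged sketch. The model's entity label set is not documented, so the BIO tags used by the helper and its example are illustrative only.

```python
# Hedged sketch: extract BPMN-style entities with the fine-tuned tagger.
# merge_bio() is a pure helper that groups (token, BIO-tag) pairs into
# (entity_text, entity_type) spans; run_ner() downloads the model and is
# not invoked here.

def merge_bio(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, entity_type) spans."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((" ".join(current), ctype))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)
        else:
            if current:
                spans.append((" ".join(current), ctype))
            current, ctype = [], None
    if current:
        spans.append((" ".join(current), ctype))
    return spans

def run_ner(text, name="addy88/bert-finetuned-bpmn"):
    """Download the checkpoint (requires network) and tag one sentence."""
    from transformers import pipeline

    ner = pipeline("token-classification", model=name,
                   aggregation_strategy="simple")
    return ner(text)
```

`run_ner(...)` is not called here because it downloads the model from the Hub.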
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.2716 | 0.7778 | 0.84 | 0.8077 | 0.9115 |
| No log | 2.0 | 20 | 0.2428 | 0.7669 | 0.8333 | 0.7987 | 0.9160 |
| No log | 3.0 | 30 | 0.2726 | 0.7875 | 0.84 | 0.8129 | 0.9205 |
| No log | 4.0 | 40 | 0.2658 | 0.7862 | 0.8333 | 0.8091 | 0.9214 |
| No log | 5.0 | 50 | 0.2470 | 0.7914 | 0.86 | 0.8243 | 0.9268 |
| No log | 6.0 | 60 | 0.2745 | 0.7791 | 0.8467 | 0.8115 | 0.9250 |
| No log | 7.0 | 70 | 0.3415 | 0.8280 | 0.8667 | 0.8469 | 0.9259 |
| No log | 8.0 | 80 | 0.3524 | 0.775 | 0.8267 | 0.8000 | 0.9178 |
| No log | 9.0 | 90 | 0.3307 | 0.8313 | 0.8867 | 0.8581 | 0.9322 |
| No log | 10.0 | 100 | 0.3161 | 0.7778 | 0.84 | 0.8077 | 0.9214 |
| No log | 11.0 | 110 | 0.3646 | 0.8387 | 0.8667 | 0.8525 | 0.9322 |
| No log | 12.0 | 120 | 0.3262 | 0.7925 | 0.84 | 0.8155 | 0.9223 |
| No log | 13.0 | 130 | 0.3436 | 0.8462 | 0.88 | 0.8627 | 0.9350 |
| No log | 14.0 | 140 | 0.3427 | 0.8516 | 0.88 | 0.8656 | 0.9377 |
| No log | 15.0 | 150 | 0.3163 | 0.7950 | 0.8533 | 0.8232 | 0.9322 |
| No log | 16.0 | 160 | 0.3233 | 0.8291 | 0.8733 | 0.8506 | 0.9377 |
| No log | 17.0 | 170 | 0.3354 | 0.8050 | 0.8533 | 0.8285 | 0.9322 |
| No log | 18.0 | 180 | 0.3468 | 0.8291 | 0.8733 | 0.8506 | 0.9341 |
| No log | 19.0 | 190 | 0.3457 | 0.8176 | 0.8667 | 0.8414 | 0.9341 |
| No log | 20.0 | 200 | 0.3456 | 0.8113 | 0.86 | 0.8350 | 0.9341 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,239 | [
[
-0.045318603515625,
-0.049224853515625,
0.0120086669921875,
0.006549835205078125,
-0.01085662841796875,
-0.00942230224609375,
-0.0014934539794921875,
-0.004978179931640625,
0.043548583984375,
0.02716064453125,
-0.05438232421875,
-0.056884765625,
-0.0488586425781... |
ml-projects/clickbait-ml_model-one | 2023-06-13T11:36:59.000Z | [
"keras",
"text-classification",
"de",
"dataset:ml-projects/clickbait-ml_dataset",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | text-classification | ml-projects | null | null | ml-projects/clickbait-ml_model-one | 0 | 2 | keras | 2023-06-13T11:15:45 | ---
license: openrail
language:
- de
metrics:
- accuracy
library_name: keras
pipeline_tag: text-classification
datasets:
- ml-projects/clickbait-ml_dataset
---
# Clickbait-ML Model One
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
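Until the official snippet is filled in, here is a hedged sketch. The model's exact input preprocessing and output format are assumptions; a single clickbait score per German headline is assumed for the helper below.

```python
# Hedged sketch: load the Keras clickbait classifier from the Hub and
# map a sigmoid score to a label. The single-score output format is an
# assumption; inspect model.summary() before relying on it.

def label_from_score(score, threshold=0.5):
    """Map an assumed sigmoid score to a human-readable label."""
    return "clickbait" if score >= threshold else "not clickbait"

def demo():
    """Download the model (requires network) and inspect its inputs."""
    from huggingface_hub import from_pretrained_keras

    model = from_pretrained_keras("ml-projects/clickbait-ml_model-one")
    model.summary()  # check the expected input shape before predict()
    return model
```

`demo()` is deliberately not invoked here, since it downloads the checkpoint from the Hub.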
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | 5,319 | [
[
-0.045745849609375,
-0.049530029296875,
0.0303497314453125,
0.0099029541015625,
-0.02264404296875,
-0.032745361328125,
0.01183319091796875,
-0.03851318359375,
0.01479339599609375,
0.05596923828125,
-0.05645751953125,
-0.049774169921875,
-0.04656982421875,
-0... |
Eitanli/resume_label_summary_model | 2023-06-14T06:35:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | Eitanli | null | null | Eitanli/resume_label_summary_model | 0 | 2 | transformers | 2023-06-13T11:51:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: resume_label_summary_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resume_label_summary_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9802
- Rouge1: 0.3129
- Rouge2: 0.191
- Rougel: 0.3133
- Rougelsum: 0.3126
- Gen Len: 15.4611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 49 | 2.6885 | 0.209 | 0.0961 | 0.21 | 0.2094 | 18.4456 |
| No log | 2.0 | 98 | 1.9802 | 0.3129 | 0.191 | 0.3133 | 0.3126 | 15.4611 |
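The ROUGE-1 scores above measure unigram overlap between generated and reference summaries. A minimal sketch of the F1 variant (the card's actual scores come from the `rouge` metric library, which additionally applies stemming and tokenization, so this simplified version will not reproduce them exactly):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a prediction and a reference (simplified ROUGE-1)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Clipped overlap: each shared unigram counts at most as often as it
    # appears in both the prediction and the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("software engineer with python experience",
                "senior software engineer experienced in python"))
```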
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1
- Datasets 2.10.1
- Tokenizers 0.13.2
| 1,601 | [
SHENMU007/neunit_BASE_V9.1 | 2023-06-20T09:47:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_BASE_V9.1 | 0 | 2 | transformers | 2023-06-14T02:03:33 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
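The total train batch size above is derived rather than set directly: with gradient accumulation, the optimizer steps once every `gradient_accumulation_steps` forward/backward passes, so the effective batch size is the per-device batch size times the accumulation steps (times the number of devices; a single device is assumed here). A quick sanity check:

```python
def effective_batch_size(per_device_batch: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    """Effective (total) train batch size under gradient accumulation."""
    return per_device_batch * grad_accum_steps * num_devices

# Values from the hyperparameter list above: 8 * 4 = total_train_batch_size 32.
print(effective_batch_size(8, 4))
```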
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,246 | [
zayedupal/movie-genre-prediction_distilbert-base-uncased | 2023-09-12T00:59:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | zayedupal | null | null | zayedupal/movie-genre-prediction_distilbert-base-uncased | 2 | 2 | transformers | 2023-06-14T06:44:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: movie-genre-prediction_distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movie-genre-prediction_distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5869
- Accuracy: 0.4489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 200
- eval_batch_size: 200
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 163 | 1.6259 | 0.4358 |
| No log | 2.0 | 326 | 1.5705 | 0.4478 |
| No log | 3.0 | 489 | 1.5566 | 0.4527 |
| 1.6342 | 4.0 | 652 | 1.5656 | 0.4537 |
| 1.6342 | 5.0 | 815 | 1.5721 | 0.4504 |
| 1.6342 | 6.0 | 978 | 1.5825 | 0.4491 |
| 1.3404 | 7.0 | 1141 | 1.5869 | 0.4489 |
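Note that the final checkpoint (epoch 7) is not the best one in the table above: validation loss bottoms out at epoch 3 and accuracy peaks at epoch 4, after which the model begins to overfit. A small sketch of selecting the best epoch from the logged results (values copied from the table):

```python
# (epoch, validation_loss, accuracy) rows copied from the training results table.
results = [
    (1, 1.6259, 0.4358),
    (2, 1.5705, 0.4478),
    (3, 1.5566, 0.4527),
    (4, 1.5656, 0.4537),
    (5, 1.5721, 0.4504),
    (6, 1.5825, 0.4491),
    (7, 1.5869, 0.4489),
]

best_by_loss = min(results, key=lambda r: r[1])  # lowest validation loss
best_by_acc = max(results, key=lambda r: r[2])   # highest accuracy
print(f"lowest val loss at epoch {best_by_loss[0]}, highest accuracy at epoch {best_by_acc[0]}")
```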
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,806 | [
hitachi-nlp/bert-base-japanese_jumanpp-wordpiece | 2023-06-16T01:03:10.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | hitachi-nlp | null | null | hitachi-nlp/bert-base-japanese_jumanpp-wordpiece | 0 | 2 | transformers | 2023-06-14T07:02:52 | ---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Juman++ + WordPiece)
===
## How to load the tokenizer
Please download the dictionary file for Juman++ + WordPiece from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/jumanpp_wordpiece.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
import traceback

from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
from pyknp import Juman
import mojimoji
import textspan


class JumanPreTokenizer:
    def __init__(self):
        self.juman = Juman("jumanpp", multithreading=True)

    def tokenize(self, sequence: str) -> list[str]:
        text = mojimoji.han_to_zen(sequence).rstrip()
        try:
            result = self.juman.analysis(text)
        except Exception:
            traceback.print_exc()
            text = ""
            result = self.juman.analysis(text)
        return [mrph.midasi for mrph in result.mrph_list()]

    def custom_split(self, i: int, normalized_string: NormalizedString) -> list[NormalizedString]:
        text = str(normalized_string)
        tokens = self.tokenize(text)
        tokens_spans = textspan.get_original_spans(tokens, text)
        return [normalized_string[st:ed] for char_spans in tokens_spans for st, ed in char_spans]

    def pre_tokenize(self, pretok: PreTokenizedString):
        pretok.split(self.custom_split)


# load a pre-tokenizer
pre_tokenizer = JumanPreTokenizer()

# load a tokenizer
dict_path = "/path/to/jumanpp_wordpiece.json"
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
    cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
    sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)

# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token='[UNK]',
    cls_token='[CLS]',
    sep_token='[SEP]',
    pad_token='[PAD]',
    mask_token='[MASK]'
)

# set a pre-tokenizer
tokenizer._tokenizer.pre_tokenizer = PreTokenizer.custom(pre_tokenizer)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こ','##んに','##ち','##は','。','私','は','形態','素','解析','器','に','ついて','研究','を','して','い','ます','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base-japanese_jumanpp-wordpiece")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
| 2,855 | [
TheBloke/minotaur-13B-fixed-GPTQ | 2023-09-27T12:44:37.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"OpenAccess AI Collective",
"MPT",
"axolotl",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"datas... | text-generation | TheBloke | null | null | TheBloke/minotaur-13B-fixed-GPTQ | 5 | 2 | transformers | 2023-06-14T07:58:58 | ---
license: other
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
model_name: Minotaur 13B Fixed
base_model: openaccess-ai-collective/minotaur-13b-fixed
inference: false
model_creator: Open Access AI Collective
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Minotaur 13B Fixed - GPTQ
- Model creator: [Open Access AI Collective](https://huggingface.co/openaccess-ai-collective)
- Original model: [Minotaur 13B Fixed](https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed)
<!-- description start -->
## Description
This repo contains GPTQ model files for [OpenAccess AI Collective's Minotaur 13B Fixed](https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/minotaur-13B-fixed-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF)
* [Open Access AI Collective's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
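The file sizes in the table below follow roughly from these parameters: each of the roughly 13B weights is stored in `bits` bits, plus per-group scale and zero-point overhead that grows as the group size shrinks, which is why the 32g files are larger than the 128g ones. A back-of-the-envelope sketch (the per-group overhead model here is our assumption: one fp16 scale plus one packed zero-point per group; real checkpoints also contain unquantised embeddings and metadata, so actual files are somewhat larger):

```python
def gptq_size_gb(n_params: float, bits: int, group_size: int = 0) -> float:
    """Rough on-disk size of a GPTQ checkpoint in GiB (group_size=0 means no grouping)."""
    weight_bytes = n_params * bits / 8
    if group_size:
        # Per-group overhead: one fp16 scale (2 bytes) plus a packed zero-point.
        weight_bytes += n_params / group_size * (2 + bits / 8)
    return weight_bytes / 1024**3

for gs in (128, 64, 32):
    print(f"4-bit, group size {gs}: ~{gptq_size_gb(13e9, 4, gs):.2f} GiB")
```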
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ/tree/main) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.45 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/minotaur-13B-fixed-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/minotaur-13B-fixed-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/minotaur-13B-fixed-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `minotaur-13B-fixed-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/minotaur-13B-fixed-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: OpenAccess AI Collective's Minotaur 13B Fixed
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Due to a bug, the initial release of Minotaur 13B dropped a few datasets during training. We have corrected the issue and this is the retrained model
The affected datasets include:
- prose generation
- classification
- coding
# Minotaur 13B (FIXED)
Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)
# Prompts
Chat only style prompts using `USER:`,`ASSISTANT:`.
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
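Concretely, a conversation in this format is just the turns joined with `USER:`/`ASSISTANT:` tags, ending on a bare `ASSISTANT:` tag so the model continues from there. A minimal helper (illustrative; the function name and turn structure are ours, not part of the original card):

```python
def build_prompt(turns: list) -> str:
    """Render (user, assistant) turn pairs in the chat-only prompt style.

    The last user turn may have no assistant reply yet (None); the prompt then
    ends with a bare 'ASSISTANT:' for the model to complete.
    """
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        parts.append(f"ASSISTANT: {assistant_msg}" if assistant_msg else "ASSISTANT:")
    return "\n".join(parts)

print(build_prompt([("Solve for x: 5x = 3x + 10", None)]))
```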
# Training Datasets
Minotaur 13B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-13b/runs/5ystr7w6/workspace
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6XA100 80GB
- 1 epoch taking approximately 7.5 hours
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA-13B, please refer to its model card's Limitations Section for relevant information. (included below)
## Benchmarks
hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-13b-fixed), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4778|± |0.0146|
| | |acc_norm|0.4932|± |0.0146|
|arc_easy | 0|acc |0.7786|± |0.0085|
| | |acc_norm|0.7487|± |0.0089|
|boolq | 1|acc |0.8141|± |0.0068|
|hellaswag | 0|acc |0.6072|± |0.0049|
| | |acc_norm|0.7929|± |0.0040|
|openbookqa | 0|acc |0.3420|± |0.0212|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8025|± |0.0093|
| | |acc_norm|0.8107|± |0.0091|
|winogrande | 0|acc |0.7285|± |0.0125|
## Examples - results may vary based on temperature (0.7 for this) and other settings
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve for x, we need to isolate x on one side of the equation. We can do this by following these steps:
1. Subtract 3x from both sides of the equation:
5x - 3x = 3x + 10 - 3x
2x = 10
2. Divide both sides of the equation by 2:
2x/2 = 10/2
x = 5
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: You have 3 apples. You eat 2 pears. How many apples do you have left?
ASSISTANT:You still have 3 apples. Eating pears does not affect the number of apples you have.
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words that start with "ex" are "express", "exact", and "excite". 33 + 77 = 110.
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Write a haiku about dinosaurs.
ASSISTANT:Titanic creatures roam,
Thunderous footsteps echo,
Extinct but not forgotten.
```
meh, 6/7/7 rather than 5/7/5
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Create an original short story in the style of J.K Rowling about the Weesley family going on a vacation in Australia whith Harry Potter and Hermione Grainger tagging along.
ASSISTANT:Once upon a time, there was a magical family named the Weesleys. They lived in a cozy little cottage at the edge of Hogsmeade village, where they ran a charming bed and breakfast called "The Flying Ford Anglia." As any other magical family, they enjoyed adventures that brought them close to nature, so they decided to go on a trip to Australia. They invited their dear friends Harry Potter and Hermione Grainger to join them on this exciting journey.
The Weesleys had been planning for months and had everything ready for their trip. Their children, Freddie and Roxanne were thrilled to visit the land down under, and their parents, George and Angelina, could not wait to explore the beautiful country with their friends. They would be leaving soon, but before departing, there was one small problem - the luggage.
"We can't possibly take all these trunks!" exclaimed Freddie as he stared at the mountain of bags stacked up in front of him.
His father looked at him seriously. "You see, son," he began, "our luggage is crucial because we have to pack all our magical essentials like wand holsters, extra wand chargers, spell books, potions ingredients, and much more."
Roxanne interrupted, "But daddy, we don't even use magic in everyday life! And what if it rains? We'll look weird carrying umbrellas around!"
Her mother joined in. "My dear, you know how unpredictable the weather can be. We must always be prepared for unexpected circumstances. Don't forget that even Muggles carry umbrellas."
After hours of debating, they finally managed to narrow down their luggage list and fit all the necessary supplies into several large trunks. The day had come; they were ready to leave for their grand adventure!
As the Weesleys boarded the Portkey that would transport them across the world, their wands began to glow softly, indicating that they had enough energy to make the journey. The Portkey dropped them off in Sydney, right in front of the magnificent Opera House.
They spent the first few days exploring the city, marveling at the iconic architecture and tasting local cuisine. Then, as planned, they headed north to visit the Great Barrier Reef, one of the most famous natural wonders of the world.
Harry and Hermione joined them during this leg of the trip, which made it even more enjoyable. Harry regaled them with tales of his own travels while Hermione shared her extensive knowledge of plants, animals, and the environment.
Soon, they arrived at a quaint town nestled among vibrant green hills and surrounded by vast cattle farms. It was here that they would learn about Aboriginal culture and see some truly unique sights.
One morning, after enjoying a hearty breakfast, they set out to explore the local area. They visited a nearby art gallery that showcased amazing Indigenous works of art, including traditional paintings, sculptures, and textiles. Afterward, they attended a didgeridoo concert given by a talented young musician who captivated everyone with his soulful tunes.
The following day, they embarked on a bushwalk through the rainforest trails. The air was cool and fresh, and the towering trees seemed to reach for the sky. Hermione took great pleasure in identifying various species of plants and explaining their medicinal properties.
As they walked deeper into the forest, they heard faint chattering noises coming from high above. To their surprise, they spotted a group of small, furry creatures perched on a tree branch. They turned out to be native Australian possums, which delighted the children, especially Freddie, who could communicate with them using sign language.
Later that afternoon, they visited Uluru, also known as Ayers Rock, a massive sandstone monolith located in the heart of the Outback. The sight of its magnificent red hues against the blue sky left them all speechless. They spent the rest of the day exploring the rock formation, taking in the breathtaking views and learning about its significance to the Anangu people.
Their last destination was Queensland, home to the world-famous Great Barrier Reef. They spent two days snorkeling, diving, and swimming among the colorful coral reefs, watching schools of fish, sea turtles, and dolphins swim past them.
As the sun began to set over the crystal-clear waters, Harry spoke up. "I think it's time we return to reality, don't you agree?" he said, looking at his wife. "This trip has been fantastic, but we can't stay forever."
Everyone nodded in agreement as they prepared themselves for their final journey back to Sydney. When they got there, they gathered on the balcony of their hotel room, reminiscing about the incredible memories they had created together.
"Thank you for inviting us along on your adventure, Weesleys," said Harry. "It was an honor and a privilege to share this experience with such kind and generous friends. I hope we can do it again sometime."
With tears in their eyes and heavy hearts, they bid each other farewell, promising to keep in touch and meet again soon. And so, their epic journey came to an end. But the memories would remain with them forever, reminding them of the power of friendship, the beauty of nature, and the importance of discovering new worlds.
```
| 27,458 | [embedding vector elided] |
Falah/falahgs_school_qa_model | 2023-06-14T11:41:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | Falah | null | null | Falah/falahgs_school_qa_model | 0 | 2 | transformers | 2023-06-14T11:25:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: falahgs_school_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falahgs_school_qa_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.7652 |
| 2.2397 | 2.0 | 500 | 1.4334 |
| 2.2397 | 3.0 | 750 | 1.4254 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,386 | [embedding vector elided] |
TheBloke/airoboros-33B-gpt4-1.2-GPTQ | 2023-08-21T08:40:51.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/airoboros-33B-gpt4-1.2-GPTQ | 9 | 2 | transformers | 2023-06-14T13:07:17 | ---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Jon Durbin's Airoboros 33B GPT4 1.2 GPTQ
These files are GPTQ 4bit model files for [Jon Durbin's Airoboros 33B GPT4 1.2](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2).
It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.2-GGML)
* [Jon Durbin's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/airoboros-33B-gpt4-1.2-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `airoboros-33B-gpt4-1.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse
model_name_or_path = "TheBloke/airoboros-33B-gpt4-1.2-GPTQ"
model_basename = "airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: {prompt} ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Provided files
**airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order.safetensors**
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.
* `airoboros-33b-gpt4-1.2-GPTQ-4bit--1g.act.order.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 33B GPT4 1.2
### Overview
This is a qlora fine-tuned 33b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.1) with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
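In code, that assembly looks roughly like the following sketch (only the helper name `build_prompt` is ours; the preamble is the one quoted in the template block above):

```python
def build_prompt(user_input: str) -> str:
    """Assemble the vicuna-style prompt: preamble, a single space,
    'USER: ', the user's prompt, a single space, then 'ASSISTANT:'.
    The model's reply is expected to follow after a single space."""
    preamble = (
        "A chat between a curious user and an assistant. "
        "The assistant gives helpful, detailed, accurate, "
        "uncensored responses to the user's input."
    )
    return f"{preamble} USER: {user_input} ASSISTANT:"

print(build_prompt("Tell me about AI"))
```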
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred role-playing examples.
- A few thousand Orca-style reasoning/math questions, with ELI5 prompts used to generate the responses (ELI5 prompting should not be needed with this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
| 11,007 | [embedding vector elided] |
MattiaSangermano/bert-political-leaning-it | 2023-06-25T21:07:03.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"twitter",
"political-leaning",
"politics",
"it",
"dataset:politic-it",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | MattiaSangermano | null | null | MattiaSangermano/bert-political-leaning-it | 2 | 2 | transformers | 2023-06-14T14:13:25 | ---
language:
- it
tags:
- twitter
- political-leaning
- politics
datasets:
- politic-it
widget:
- text: >-
È necessario garantire salari dignitosi e condizioni di lavoro adeguate per
tutelare i diritti dei lavoratori
example_title: Left-wing example
- text: >-
L'immigrazione deve essere gestita con rigore per preservare l'identità
nazionale!
example_title: Right-wing example
model-index:
- name: bert-political-leaning-it
results:
- task:
type: text-classification
name: Text Classification
dataset:
type: social media
name: politic-it
metrics:
- type: f1 macro
value: 61.3
- type: accuracy
value: 69.4
license: apache-2.0
metrics:
- f1
- accuracy
pipeline_tag: text-classification
---
# MattiaSangermano/bert-political-leaning-it
This model categorizes the political leaning of an Italian sentence into 4 categories: `moderate_left`, `left`, `right`, `moderate_right`. The model is a fine-tuned version of [neuraly/bert-base-italian-cased-sentiment](https://huggingface.co/neuraly/bert-base-italian-cased-sentiment).
- **Developed by:** [Mattia Sangermano](https://www.linkedin.com/in/mattia-sangermano/) and [Fabio Murgese](https://www.linkedin.com/in/fabio-murgese/)
- **Model type:** Bert
- **Language(s) (NLP):** it
- **License:** Apache 2.0
### How to Get Started with the Model
You can use this model directly with a pipeline for text classification:
``` python
from transformers import pipeline
classifier = pipeline("text-classification",model='MattiaSangermano/bert-political-leaning-it')
prediction = classifier("Sovranità nazionale e identità forte")
print(prediction)
```
Here is how to use this model to classify a text in PyTorch:
``` python
from transformers import BertForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained('MattiaSangermano/bert-political-leaning-it')
model = BertForSequenceClassification.from_pretrained('MattiaSangermano/bert-political-leaning-it')
tokens = tokenizer("Uguaglianza e giustizia sociale", return_tensors='pt')
logits = model(**tokens)[0]
prediction = model.config.id2label[torch.argmax(logits).item()]
print(prediction)
```
and in TensorFlow:
``` python
from transformers import AutoTokenizer, TFBertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained('MattiaSangermano/bert-political-leaning-it')
model = TFBertForSequenceClassification.from_pretrained('MattiaSangermano/bert-political-leaning-it')
tokens = tokenizer("Ambiente sano, futuro sicuro", padding=True,truncation=True,return_tensors='tf')
logits = model(tokens)[0]
prediction = model.config.id2label[tf.argmax(logits,1)[0].numpy()]
print(prediction)
```
### Out-of-Scope Use
It is important to recognize that political leaning is a personal and complex aspect of an individual's identity and attempting to classify it can be considered unethical and raise significant concerns. Therefore, the model should not be used to identify or classify the political orientation of individual users, nor should it be used for unethical purposes.
## Bias, Risks, and Limitations
During the construction of the dataset, deliberate efforts were made to exclude the names of politicians and political parties. As a result, these specific names might not hold relevance to the model.
## Dataset
We trained the model using the [PoliticIT](https://codalab.lisn.upsaclay.fr/competitions/8507#learn_the_details) competition dataset. The dataset was collected during 2020 and 2022 from the Twitter accounts of Italian politicians. These users were selected because their political affiliation can be inferred from the party to which they belong. The goal of the task was to classify a cluster of tweets, where a cluster is composed of texts written by different users who share the same self-assigned gender and political ideology.
### Preprocessing
According to the PoliticIT maintainers, tweets containing mentions of news sites, or certain linguistic clues such as the pipe symbol (commonly used by news sites to categorise their news), were discarded from the dataset. Moreover, Twitter mentions were anonymised by replacing them with the token @user, so the text traits cannot be guessed trivially by reading a politician's name and searching for information about them on the Internet. Overall, the dataset consists of 103840 tweets.
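The mention anonymisation amounts to a simple substitution; a minimal sketch (the exact rule used by the organisers may differ, e.g. in how handles are tokenised):

```python
import re

def anonymize_mentions(text: str) -> str:
    # Replace every Twitter handle (@ followed by word characters)
    # with the placeholder token "@user".
    return re.sub(r"@\w+", "@user", text)

print(anonymize_mentions("Grazie @mario_rossi per il confronto"))
# -> Grazie @user per il confronto
```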
#### Training Procedure
The dataset was split into train and validation sets with a stratified 80-20 split. Although the main task of the original competition was to classify clusters of tweets, this model was trained to predict the political leaning of individual tweets only.
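A stratified 80-20 split keeps each label's share equal across the two partitions. A dependency-free sketch of the idea (in practice this is typically done with a library helper such as scikit-learn's `train_test_split(..., stratify=labels)`):

```python
import random
from collections import defaultdict

def stratified_split(labels, train_ratio=0.8, seed=42):
    """Return (train_idx, val_idx) splitting each label train_ratio / rest."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    train_idx, val_idx = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)           # shuffle within each class
        cut = int(len(idxs) * train_ratio)
        train_idx.extend(idxs[:cut])
        val_idx.extend(idxs[cut:])
    return train_idx, val_idx

labels = ["left"] * 40 + ["right"] * 10
train, val = stratified_split(labels)
print(len(train), len(val))  # 40 10
```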
### Training Hyperparameters
- *Optimizer*: **Adam** with learning rate of **4e-5**, epsilon of **1e-7**
- *Loss*: **Categorical Cross Entropy** using **balanced** class weights
- *Max epochs*: **10**
- *Batch size*: **64**
- *Early Stopping*: monitoring validation loss with patience = **3**
- *Training regime*: fp16 mixed precision
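Balanced class weights re-weight the loss so that rare classes count as much as common ones; the usual formula (the one implemented by scikit-learn's `compute_class_weight('balanced', ...)`) is `n_samples / (n_classes * count[c])`. A small illustration with made-up counts:

```python
from collections import Counter

def balanced_class_weights(labels):
    """weight[c] = n_samples / (n_classes * count[c])"""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# A 3:1 imbalanced toy set: the rare class gets weight 2.0.
weights = balanced_class_weights(["left"] * 30 + ["right"] * 10)
print(weights)
```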
## Evaluation
- test **f1-macro**: 61.3
- test **accuracy**: 69.4
| Avg Type | Precision | Recall | F1-score | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| Macro | 0.67 | 0.61 | 0.61 | - |
| Weighted | 0.74 | 0.69 | 0.77 | 0.69 | | 5,328 | [
[
-0.0236358642578125,
-0.035552978515625,
0.0181121826171875,
0.0096893310546875,
-0.03863525390625,
0.01666259765625,
-0.030914306640625,
-0.0034275054931640625,
0.01788330078125,
0.014862060546875,
-0.03607177734375,
-0.056640625,
-0.06396484375,
-0.0020351... |
ethzanalytics/RedPajama-INCITE-7B-Base-sharded-bf16 | 2023-06-16T09:39:26.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"sharded",
"bf16",
"base",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | ethzanalytics | null | null | ethzanalytics/RedPajama-INCITE-7B-Base-sharded-bf16 | 0 | 2 | transformers | 2023-06-14T15:19:34 | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
pipeline_tag: text-generation
tags:
- sharded
- bf16
- base
---
# RedPajama-INCITE-7B-Base-sharded-bf16
This is the `togethercomputer/RedPajama-INCITE-7B-Base` model, but the model file(s) have been sharded to ~2GB each to ensure it can be loaded on low-RAM runtimes (like Colab).
Please refer to the [original model card](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base) for all details and known issues with respect to this model.
- inference examples are also available on the original model card linked above.
- [example colab notebook](https://colab.research.google.com/gist/pszemraj/d2f73ceaf4939bc55bdf5692cd0302d6/scratchpad.ipynb) covering the basics
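Sharding happens at save time via `save_pretrained`'s `max_shard_size` argument. A sketch of the shard-count arithmetic and of the resharding call (sizes are illustrative; a 7B-parameter model in bf16 is roughly 14 GB):

```python
import math

def num_shards(total_bytes: int, max_shard_bytes: int) -> int:
    # Each shard file holds at most max_shard_bytes of weights.
    return math.ceil(total_bytes / max_shard_bytes)

# ~7e9 params * 2 bytes (bf16), split into ~2 GB shards:
print(num_shards(14_000_000_000, 2_000_000_000))  # 7

# The actual resharding (requires transformers; not run here):
# model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
# model.save_pretrained("out_dir", max_shard_size="2GB")
```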
---
| 757 | [embedding vector elided] |
aymericb/en_pipeline | 2023-06-14T15:22:52.000Z | [
"spacy",
"token-classification",
"text-classification",
"en",
"model-index",
"region:us"
] | text-classification | aymericb | null | null | aymericb/en_pipeline | 0 | 2 | spacy | 2023-06-14T15:22:34 | ---
tags:
- spacy
- token-classification
- text-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.0
- name: NER Recall
type: recall
value: 0.0
- name: NER F Score
type: f_score
value: 0.0
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.0
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.0
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.0
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.0
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 1.0
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner`, `textcat_multilabel` |
| **Components** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner`, `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (114 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
| **`textcat_multilabel`** | `Keep` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `SENTS_F` | 100.00 |
| `LEMMA_ACC` | 0.00 |
| `ENTS_F` | 0.00 |
| `ENTS_P` | 0.00 |
| `ENTS_R` | 0.00 |
| `ENTS_PER_TYPE` | 0.00 |
| `CATS_SCORE` | 91.25 |
| `CATS_MICRO_P` | 93.94 |
| `CATS_MICRO_R` | 97.89 |
| `CATS_MICRO_F` | 95.88 |
| `CATS_MACRO_P` | 93.94 |
| `CATS_MACRO_R` | 97.89 |
| `CATS_MACRO_F` | 95.88 |
| `CATS_MACRO_AUC` | 91.25 |
| `TOK2VEC_LOSS` | 0.00 |
| `TEXTCAT_MULTILABEL_LOSS` | 15.98 | | 3,372 | [
[
-0.04345703125,
-0.0198516845703125,
0.01727294921875,
0.0264739990234375,
-0.0258331298828125,
0.0185546875,
-0.00251007080078125,
-0.01666259765625,
0.038116455078125,
0.0300750732421875,
-0.0548095703125,
-0.064697265625,
-0.05828857421875,
0.019577026367... |
JsBetancourt/gpt-chistes | 2023-06-14T16:30:28.000Z | [
"peft",
"pytorch",
"tensorboard",
"generated_from_trainer",
"text-generation",
"region:us"
] | text-generation | JsBetancourt | null | null | JsBetancourt/gpt-chistes | 0 | 2 | peft | 2023-06-14T15:42:16 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt-chistes
results: []
library_name: peft
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-chistes
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
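Gradient accumulation is what turns the per-step batch of 8 into the effective batch of 64: gradients from 8 micro-batches are accumulated before a single optimizer update. A framework-free sketch of the loop (a toy scalar SGD, not the actual Trainer internals):

```python
def sgd_with_accumulation(grads, accum_steps=8, lr=2e-4, w=0.0):
    """Apply one SGD update per accum_steps micro-batch gradients,
    averaging the accumulated gradient before the step."""
    acc, updates = 0.0, 0
    for i, g in enumerate(grads, 1):
        acc += g                       # accumulate micro-batch gradient
        if i % accum_steps == 0:       # effective batch complete
            w -= lr * (acc / accum_steps)
            acc, updates = 0.0, updates + 1
    return w, updates

w, n_updates = sgd_with_accumulation([1.0] * 16)
print(n_updates)  # 2 optimizer steps for 16 micro-batches
```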
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,212 | [
[
-0.034881591796875,
-0.060302734375,
0.0105133056640625,
0.01239776611328125,
-0.034759521484375,
-0.0224456787109375,
-0.008026123046875,
-0.03985595703125,
0.0215911865234375,
0.010711669921875,
-0.044189453125,
-0.032135009765625,
-0.053253173828125,
0.00... |
hongyin/awareness-en-zh-bilingual-1.4b | 2023-09-20T07:56:21.000Z | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"en",
"zh",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | hongyin | null | null | hongyin/awareness-en-zh-bilingual-1.4b | 1 | 2 | transformers | 2023-06-15T03:23:16 | ---
language:
- en
- zh
pipeline_tag: text-generation
---
## hongyin/awareness-en-zh-bilingual-1.4b
This is an English-Chinese bilingual autoregressive language model based on Bloom, with a parameter size of 1.4b. The model is trained with the next-token prediction (NTP) objective on large-scale unstructured text. It is worth noting that the model is suitable as an initialization for further training; it does not have the ability to be used as a chatbot directly unless it is fine-tuned on a multi-round dialogue corpus. This model is a basic model, used as raw material for alchemy.
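The NTP objective pairs each position's input with the following token as its label; a minimal sketch of the shift (token ids are made up):

```python
def ntp_pairs(token_ids):
    """For causal-LM training, inputs are ids[:-1] and labels are ids[1:],
    so the model at position t predicts the token at position t+1."""
    return token_ids[:-1], token_ids[1:]

inputs, labels = ntp_pairs([5, 17, 9, 2])
print(inputs, labels)  # [5, 17, 9] [17, 9, 2]
```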
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
--- | 919 | [
[
-0.0003159046173095703,
-0.054931640625,
0.00859832763671875,
0.048858642578125,
-0.00044345855712890625,
-0.023468017578125,
-0.0104522705078125,
-0.049896240234375,
0.0309295654296875,
0.03948974609375,
-0.0384521484375,
-0.0265350341796875,
-0.036163330078125... |
soddokayo/klue-roberta-large-klue-2016klp-crime-2-ner | 2023-06-15T05:42:19.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | soddokayo | null | null | soddokayo/klue-roberta-large-klue-2016klp-crime-2-ner | 0 | 2 | transformers | 2023-06-15T05:35:12 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue-roberta-large-klue-2016klp-crime-2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-roberta-large-klue-2016klp-crime-2-ner
This model is a fine-tuned version of [soddokayo/klue-roberta-large-klue-2016klp-ner](https://huggingface.co/soddokayo/klue-roberta-large-klue-2016klp-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4442
- Precision: 0.1538
- Recall: 0.1124
- F1: 0.1299
- Accuracy: 0.9005
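As a quick sanity check, the reported F1 is the harmonic mean of the reported precision and recall; recomputing it from the rounded values above reproduces the listed score:

```python
precision = 0.1538
recall = 0.1124

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # matches the reported 0.1299
```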
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 0.6609 | 0.0 | 0.0 | 0.0 | 0.8536 |
| No log | 2.0 | 14 | 0.4892 | 0.02 | 0.0112 | 0.0144 | 0.8817 |
| No log | 3.0 | 21 | 0.4442 | 0.1538 | 0.1124 | 0.1299 | 0.9005 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.11.0
| 1,760 | [
[
-0.031280517578125,
-0.0440673828125,
0.0212249755859375,
0.00313568115234375,
-0.018951416015625,
-0.034423828125,
-0.016693115234375,
-0.02984619140625,
0.0169677734375,
0.027618408203125,
-0.048858642578125,
-0.0450439453125,
-0.055023193359375,
-0.006481... |
soddokayo/klue-roberta-large-klue-crime-2-ner | 2023-06-15T06:25:12.000Z | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | soddokayo | null | null | soddokayo/klue-roberta-large-klue-crime-2-ner | 0 | 2 | transformers | 2023-06-15T06:01:17 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue-roberta-large-klue-crime-2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-roberta-large-klue-crime-2-ner
This model is a fine-tuned version of [soddokayo/klue-roberta-large-klue-ner](https://huggingface.co/soddokayo/klue-roberta-large-klue-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5080
- Precision: 0.0727
- Recall: 0.0449
- F1: 0.0556
- Accuracy: 0.8794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 1.0363 | 0.0 | 0.0 | 0.0 | 0.8425 |
| No log | 2.0 | 14 | 0.8076 | 0.0 | 0.0 | 0.0 | 0.8501 |
| No log | 3.0 | 21 | 0.6843 | 0.0 | 0.0 | 0.0 | 0.8554 |
| No log | 4.0 | 28 | 0.6184 | 0.0 | 0.0 | 0.0 | 0.8595 |
| No log | 5.0 | 35 | 0.5803 | 0.0 | 0.0 | 0.0 | 0.8618 |
| No log | 6.0 | 42 | 0.5546 | 0.0 | 0.0 | 0.0 | 0.8706 |
| No log | 7.0 | 49 | 0.5338 | 0.0189 | 0.0112 | 0.0141 | 0.8741 |
| No log | 8.0 | 56 | 0.5195 | 0.0370 | 0.0225 | 0.0280 | 0.8753 |
| No log | 9.0 | 63 | 0.5111 | 0.0545 | 0.0337 | 0.0417 | 0.8770 |
| No log | 10.0 | 70 | 0.5080 | 0.0727 | 0.0449 | 0.0556 | 0.8794 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.11.0
| 2,373 | [
[
-0.034271240234375,
-0.0426025390625,
0.018096923828125,
-0.0006036758422851562,
-0.011688232421875,
-0.020050048828125,
-0.010009765625,
-0.022369384765625,
0.028656005859375,
0.024993896484375,
-0.04791259765625,
-0.050933837890625,
-0.055328369140625,
-0.... |
Falah/Mask_awesome_eli5_mlm_model | 2023-06-15T07:47:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | Falah | null | null | Falah/Mask_awesome_eli5_mlm_model | 0 | 2 | transformers | 2023-06-15T07:05:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Mask_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mask_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7361 | 1.0 | 1131 | 2.0661 |
| 1.8475 | 2.0 | 2262 | 2.0314 |
| 1.983 | 3.0 | 3393 | 2.0085 |
| 2.0677 | 4.0 | 4524 | 1.9931 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,426 | [
[
-0.0268402099609375,
-0.04547119140625,
0.0063629150390625,
0.018951416015625,
-0.0178375244140625,
-0.02532958984375,
0.0031528472900390625,
-0.0129547119140625,
0.0059051513671875,
0.0301361083984375,
-0.0567626953125,
-0.046142578125,
-0.052490234375,
-0.... |
nickmuchi/distilroberta-base-movie-genre-prediction | 2023-09-27T15:36:58.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | nickmuchi | null | null | nickmuchi/distilroberta-base-movie-genre-prediction | 0 | 2 | transformers | 2023-06-15T09:09:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilroberta-base
model-index:
- name: distilroberta-base-movie-genre-prediction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-movie-genre-prediction
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0597
- Accuracy: 0.3557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
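With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from its initial value to zero over the total number of optimizer steps (13500 here, per the training log: 10 epochs × 1350 steps). A sketch of that schedule, mirroring the shape used by `transformers`:

```python
BASE_LR = 2e-05
TOTAL_STEPS = 13500  # 10 epochs x 1350 steps, per the training log

def linear_lr(step):
    """Linear decay from BASE_LR at step 0 to 0 at TOTAL_STEPS (no warmup)."""
    return BASE_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

lr_start = linear_lr(0)      # 2e-05
lr_mid = linear_lr(6750)     # half the base rate
lr_end = linear_lr(13500)    # 0.0
```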
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7392 | 1.0 | 1350 | 1.7042 | 0.4017 |
| 1.6085 | 2.0 | 2700 | 1.6785 | 0.4018 |
| 1.5216 | 3.0 | 4050 | 1.7163 | 0.3945 |
| 1.4188 | 4.0 | 5400 | 1.7409 | 0.3951 |
| 1.3398 | 5.0 | 6750 | 1.7966 | 0.3858 |
| 1.2612 | 6.0 | 8100 | 1.8642 | 0.3801 |
| 1.1563 | 7.0 | 9450 | 1.9256 | 0.3670 |
| 1.1113 | 8.0 | 10800 | 1.9928 | 0.3607 |
| 1.0813 | 9.0 | 12150 | 2.0328 | 0.3550 |
| 1.0329 | 10.0 | 13500 | 2.0597 | 0.3557 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,980 | [
[
-0.030853271484375,
-0.044921875,
0.008636474609375,
0.01039886474609375,
-0.019622802734375,
-0.005496978759765625,
0.0050201416015625,
-0.00386810302734375,
0.0177764892578125,
0.0214691162109375,
-0.05511474609375,
-0.043701171875,
-0.06317138671875,
-0.0... |
Moses25/llama-7b-adapter | 2023-09-14T14:45:03.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:afl-3.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | Moses25 | null | null | Moses25/llama-7b-adapter | 0 | 2 | transformers | 2023-06-15T09:20:46 | ---
license: afl-3.0
---
#### Training code: see the [training code](https://github.com/moseshu/deeplearning/blob/main/nlp/train_qlora.sh)
The lora_checkpoint path is produced by training with train_qlora.sh.
#### For Alpaca and XGen-LoRA training, see the [code](https://github.com/moseshu/deeplearning/blob/main/llama-finetune/alpaca_lora.py)
[
-0.019561767578125,
-0.024688720703125,
0.025726318359375,
0.01184844970703125,
-0.0254974365234375,
0.01096343994140625,
0.0298309326171875,
-0.0278778076171875,
0.0218505859375,
0.0382080078125,
-0.074462890625,
-0.03460693359375,
-0.032562255859375,
-0.01... |
pphuc25/whisper-tiny-testing-common-voice | 2023-06-15T10:50:15.000Z | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | pphuc25 | null | null | pphuc25/whisper-tiny-testing-common-voice | 0 | 2 | transformers | 2023-06-15T10:20:51 | ---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
model-index:
- name: whisper-tiny-testing-common-voice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-testing-common-voice
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0015
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
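The listed totals follow from the distributed setup (per-device batch of 32 across 8 TPU devices), and `lr_scheduler_type: linear` with 100 warmup steps implies a warmup-then-decay learning-rate curve. A sketch mirroring the shape of `transformers`' linear schedule with warmup:

```python
LR, WARMUP, TOTAL = 0.0015, 100, 1000

def linear_warmup_lr(step):
    """Ramp linearly from 0 to LR over WARMUP steps, then decay to 0 at TOTAL."""
    if step < WARMUP:
        return LR * step / WARMUP
    return LR * max(0.0, (TOTAL - step) / (TOTAL - WARMUP))

# Per-device batch of 32 across 8 TPU devices gives the listed total of 256.
total_train_batch = 32 * 8

peak = linear_warmup_lr(100)   # LR is reached right at the end of warmup
end = linear_warmup_lr(1000)   # fully decayed to zero
```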
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3885 | 14.29 | 100 | 3.7435 |
| 1.1828 | 28.57 | 200 | 3.7666 |
| 0.773 | 42.86 | 300 | 3.8743 |
| 0.5042 | 57.14 | 400 | 3.9884 |
| 0.3311 | 71.43 | 500 | 4.0333 |
| 0.2123 | 85.71 | 600 | 4.4089 |
| 0.1247 | 100.0 | 700 | 4.5347 |
| 0.0659 | 114.29 | 800 | 4.7366 |
| 0.0257 | 128.57 | 900 | 5.0738 |
| 0.0069 | 142.86 | 1000 | 5.1334 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0a0+gitcc01568
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,922 | [
[
-0.03424072265625,
-0.045989990234375,
0.004070281982421875,
0.007480621337890625,
-0.0146636962890625,
-0.037506103515625,
-0.0169525146484375,
-0.015838623046875,
0.014373779296875,
0.0171051025390625,
-0.059051513671875,
-0.0361328125,
-0.03924560546875,
... |
erfanzar/FlaxFalcon | 2023-06-28T07:52:23.000Z | [
"transformers",
"falcon",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2306.01116",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | erfanzar | null | null | erfanzar/FlaxFalcon | 0 | 2 | transformers | 2023-06-15T13:29:06 | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# Falcon-RW-7B
**Falcon-RW-7B is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). It is made available under the Apache 2.0 license.**
See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for more details.
# Note: this repo contains the Falcon model in JAX/Flax, making it available to fine-tune and train on free Kaggle TPUs!
[
-0.04400634765625,
-0.061065673828125,
0.031829833984375,
0.030303955078125,
-0.0225982666015625,
-0.0225372314453125,
0.02923583984375,
-0.049163818359375,
0.0174407958984375,
0.057830810546875,
-0.0592041015625,
-0.0198211669921875,
-0.047576904296875,
0.0... |
fx1H/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-15T13:29:54.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | fx1H | null | null | fx1H/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-15T13:29:23 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 331.50 +/- 133.44
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fx1H -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fx1H -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fx1H
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
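Under SB3's DQN, `exploration_fraction` and `exploration_final_eps` define a linear ε-greedy schedule: ε anneals from its initial value (1.0 by SB3 default — an assumption here, since `exploration_initial_eps` is not listed above) down to 0.01 over the first 10% of the 1M timesteps, then holds. A sketch:

```python
INITIAL_EPS = 1.0        # SB3 default (not listed above) -- an assumption
FINAL_EPS = 0.01         # exploration_final_eps
FRACTION = 0.1           # exploration_fraction
N_TIMESTEPS = 1_000_000  # n_timesteps

def epsilon(step):
    """Linearly anneal epsilon over the first FRACTION of training, then hold."""
    progress = min(1.0, step / (FRACTION * N_TIMESTEPS))
    return INITIAL_EPS + progress * (FINAL_EPS - INITIAL_EPS)

eps_start = epsilon(0)        # 1.0: fully random actions at first
eps_mid = epsilon(50_000)     # halfway through the annealing window
eps_floor = epsilon(200_000)  # held at 0.01 for the rest of training
```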
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,748 | [
[
-0.04296875,
-0.039093017578125,
0.02020263671875,
0.02447509765625,
-0.01114654541015625,
-0.0169525146484375,
0.01041412353515625,
-0.0125579833984375,
0.0125579833984375,
0.0226593017578125,
-0.07269287109375,
-0.03515625,
-0.0247955322265625,
-0.00354385... |
amjadfqs/finalProject | 2023-06-16T22:28:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | amjadfqs | null | null | amjadfqs/finalProject | 0 | 2 | transformers | 2023-06-15T17:30:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
model-index:
- name: finalProject
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9890023566378633
- name: Precision
type: precision
value: 0.9894345375382527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finalProject
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9890
- F1 Score: 0.9892
- Precision: 0.9894
- Sensitivity: 0.9891
- Specificity: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
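The listed `total_train_batch_size` follows from gradient accumulation, and `lr_scheduler_warmup_ratio` fixes the warmup length as a fraction of the total optimizer steps (300 here, per the training log: 10 epochs × 30 steps). A sketch of both derivations:

```python
# Effective batch size: each optimizer step accumulates gradients over
# grad_accum_steps forward passes of train_batch_size samples each.
train_batch_size = 64
grad_accum_steps = 4
effective_batch = train_batch_size * grad_accum_steps  # the listed 256

# Warmup length implied by warmup_ratio over the 300 total optimizer steps
# (10 epochs x 30 steps, per the training log).
total_steps = 300
warmup_steps = int(0.1 * total_steps)
```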
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:|
| 0.3384 | 1.0 | 30 | 0.2387 | 0.9144 | 0.9163 | 0.9197 | 0.9146 | 0.9781 |
| 0.1608 | 2.0 | 60 | 0.1635 | 0.9466 | 0.9476 | 0.9485 | 0.9474 | 0.9865 |
| 0.0953 | 3.0 | 90 | 0.0915 | 0.9698 | 0.9703 | 0.9706 | 0.9706 | 0.9924 |
| 0.0573 | 4.0 | 120 | 0.1125 | 0.9607 | 0.9617 | 0.9634 | 0.9621 | 0.9901 |
| 0.0335 | 5.0 | 150 | 0.0536 | 0.9827 | 0.9831 | 0.9837 | 0.9826 | 0.9957 |
| 0.0185 | 6.0 | 180 | 0.0543 | 0.9827 | 0.9830 | 0.9837 | 0.9825 | 0.9957 |
| 0.0226 | 7.0 | 210 | 0.0478 | 0.9859 | 0.9861 | 0.9866 | 0.9856 | 0.9965 |
| 0.0131 | 8.0 | 240 | 0.0468 | 0.9843 | 0.9846 | 0.9847 | 0.9846 | 0.9961 |
| 0.0087 | 9.0 | 270 | 0.0411 | 0.9890 | 0.9892 | 0.9894 | 0.9891 | 0.9972 |
| 0.0043 | 10.0 | 300 | 0.0376 | 0.9886 | 0.9888 | 0.9890 | 0.9887 | 0.9971 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
| 3,122 | [
[
-0.0416259765625,
-0.036773681640625,
0.00872039794921875,
0.0097198486328125,
-0.0057220458984375,
-0.0130462646484375,
-0.0006198883056640625,
-0.016754150390625,
0.0229339599609375,
0.01715087890625,
-0.05084228515625,
-0.05340576171875,
-0.04541015625,
-... |
grantprice/pythia-410m-deduped-finetuned-Critical-Role | 2023-06-16T16:27:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | grantprice | null | null | grantprice/pythia-410m-deduped-finetuned-Critical-Role | 0 | 2 | transformers | 2023-06-15T19:15:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: pythia-410m-deduped-finetuned-Critical-Role
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-410m-deduped-finetuned-Critical-Role
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0807
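For a causal language model, the evaluation cross-entropy loss maps directly to perplexity via exponentiation — a quick sketch using the final loss above:

```python
import math

eval_loss = 3.0807
perplexity = math.exp(eval_loss)  # roughly 21.8 on the evaluation set
```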
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6559 | 1.0 | 1268 | 2.8587 |
| 2.1501 | 2.0 | 2536 | 2.8971 |
| 1.6996 | 3.0 | 3804 | 3.0807 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,439 | [
[
-0.03204345703125,
-0.03900146484375,
0.009918212890625,
-0.002777099609375,
-0.0277099609375,
-0.0307464599609375,
-0.00760650634765625,
-0.0122528076171875,
0.00598907470703125,
0.0298919677734375,
-0.0416259765625,
-0.034698486328125,
-0.042449951171875,
... |
michaelfeil/ct2fast-e5-large-v2 | 2023-10-13T13:37:36.000Z | [
"sentence-transformers",
"bert",
"ctranslate2",
"int8",
"float16",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | sentence-similarity | michaelfeil | null | null | michaelfeil/ct2fast-e5-large-v2 | 3 | 2 | sentence-transformers | 2023-06-15T20:11:32 | ---
tags:
- ctranslate2
- int8
- float16
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-large-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.22388059701493
- type: ap
value: 43.20816505595132
- type: f1
value: 73.27811303522058
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.748325
- type: ap
value: 90.72534979701297
- type: f1
value: 93.73895874282185
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.612
- type: f1
value: 47.61157345898393
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.541999999999998
- type: map_at_10
value: 38.208
- type: map_at_100
value: 39.417
- type: map_at_1000
value: 39.428999999999995
- type: map_at_3
value: 33.95
- type: map_at_5
value: 36.329
- type: mrr_at_1
value: 23.755000000000003
- type: mrr_at_10
value: 38.288
- type: mrr_at_100
value: 39.511
- type: mrr_at_1000
value: 39.523
- type: mrr_at_3
value: 34.009
- type: mrr_at_5
value: 36.434
- type: ndcg_at_1
value: 23.541999999999998
- type: ndcg_at_10
value: 46.417
- type: ndcg_at_100
value: 51.812000000000005
- type: ndcg_at_1000
value: 52.137
- type: ndcg_at_3
value: 37.528
- type: ndcg_at_5
value: 41.81
- type: precision_at_1
value: 23.541999999999998
- type: precision_at_10
value: 7.269
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.979
- type: precision_at_5
value: 11.664
- type: recall_at_1
value: 23.541999999999998
- type: recall_at_10
value: 72.688
- type: recall_at_100
value: 96.871
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 47.937000000000005
- type: recall_at_5
value: 58.321
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.546499570522094
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.01607489943561
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.616107510107774
- type: mrr
value: 72.75106626214661
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.33018094733868
- type: cos_sim_spearman
value: 83.60190492611737
- type: euclidean_pearson
value: 82.1492450218961
- type: euclidean_spearman
value: 82.70308926526991
- type: manhattan_pearson
value: 81.93959600076842
- type: manhattan_spearman
value: 82.73260801016369
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.54545454545455
- type: f1
value: 84.49582530928923
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.362725540120096
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.849509608178145
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.502999999999997
- type: map_at_10
value: 43.323
- type: map_at_100
value: 44.708999999999996
- type: map_at_1000
value: 44.838
- type: map_at_3
value: 38.987
- type: map_at_5
value: 41.516999999999996
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 49.13
- type: mrr_at_100
value: 49.697
- type: mrr_at_1000
value: 49.741
- type: mrr_at_3
value: 45.804
- type: mrr_at_5
value: 47.842
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 50.266999999999996
- type: ndcg_at_100
value: 54.967
- type: ndcg_at_1000
value: 56.976000000000006
- type: ndcg_at_3
value: 43.823
- type: ndcg_at_5
value: 47.12
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 10.057
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.125
- type: precision_at_5
value: 15.851
- type: recall_at_1
value: 31.502999999999997
- type: recall_at_10
value: 63.715999999999994
- type: recall_at_100
value: 83.61800000000001
- type: recall_at_1000
value: 96.63199999999999
- type: recall_at_3
value: 45.403
- type: recall_at_5
value: 54.481
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.833000000000002
- type: map_at_10
value: 37.330999999999996
- type: map_at_100
value: 38.580999999999996
- type: map_at_1000
value: 38.708
- type: map_at_3
value: 34.713
- type: map_at_5
value: 36.104
- type: mrr_at_1
value: 35.223
- type: mrr_at_10
value: 43.419000000000004
- type: mrr_at_100
value: 44.198
- type: mrr_at_1000
value: 44.249
- type: mrr_at_3
value: 41.614000000000004
- type: mrr_at_5
value: 42.553000000000004
- type: ndcg_at_1
value: 35.223
- type: ndcg_at_10
value: 42.687999999999995
- type: ndcg_at_100
value: 47.447
- type: ndcg_at_1000
value: 49.701
- type: ndcg_at_3
value: 39.162
- type: ndcg_at_5
value: 40.557
- type: precision_at_1
value: 35.223
- type: precision_at_10
value: 7.962
- type: precision_at_100
value: 1.304
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.184999999999999
- type: recall_at_1
value: 27.833000000000002
- type: recall_at_10
value: 51.881
- type: recall_at_100
value: 72.04
- type: recall_at_1000
value: 86.644
- type: recall_at_3
value: 40.778
- type: recall_at_5
value: 45.176
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.175
- type: map_at_10
value: 51.174
- type: map_at_100
value: 52.26499999999999
- type: map_at_1000
value: 52.315999999999995
- type: map_at_3
value: 47.897
- type: map_at_5
value: 49.703
- type: mrr_at_1
value: 43.448
- type: mrr_at_10
value: 54.505
- type: mrr_at_100
value: 55.216
- type: mrr_at_1000
value: 55.242000000000004
- type: mrr_at_3
value: 51.98500000000001
- type: mrr_at_5
value: 53.434000000000005
- type: ndcg_at_1
value: 43.448
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.537
- type: ndcg_at_1000
value: 62.546
- type: ndcg_at_3
value: 51.73799999999999
- type: ndcg_at_5
value: 54.324
- type: precision_at_1
value: 43.448
- type: precision_at_10
value: 9.292
- type: precision_at_100
value: 1.233
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.218
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.175
- type: recall_at_10
value: 72.00999999999999
- type: recall_at_100
value: 90.155
- type: recall_at_1000
value: 97.257
- type: recall_at_3
value: 57.133
- type: recall_at_5
value: 63.424
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.405
- type: map_at_10
value: 30.043
- type: map_at_100
value: 31.191000000000003
- type: map_at_1000
value: 31.275
- type: map_at_3
value: 27.034000000000002
- type: map_at_5
value: 28.688000000000002
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.993
- type: mrr_at_100
value: 32.992
- type: mrr_at_1000
value: 33.050000000000004
- type: mrr_at_3
value: 28.964000000000002
- type: mrr_at_5
value: 30.653000000000002
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 40.709
- type: ndcg_at_1000
value: 42.855
- type: ndcg_at_3
value: 29.139
- type: ndcg_at_5
value: 32.045
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.65
- type: precision_at_100
value: 0.885
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 22.405
- type: recall_at_10
value: 49.391
- type: recall_at_100
value: 74.53699999999999
- type: recall_at_1000
value: 90.605
- type: recall_at_3
value: 33.126
- type: recall_at_5
value: 40.073
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.309999999999999
- type: map_at_10
value: 20.688000000000002
- type: map_at_100
value: 22.022
- type: map_at_1000
value: 22.152
- type: map_at_3
value: 17.954
- type: map_at_5
value: 19.439
- type: mrr_at_1
value: 16.294
- type: mrr_at_10
value: 24.479
- type: mrr_at_100
value: 25.515
- type: mrr_at_1000
value: 25.593
- type: mrr_at_3
value: 21.642
- type: mrr_at_5
value: 23.189999999999998
- type: ndcg_at_1
value: 16.294
- type: ndcg_at_10
value: 25.833000000000002
- type: ndcg_at_100
value: 32.074999999999996
- type: ndcg_at_1000
value: 35.083
- type: ndcg_at_3
value: 20.493
- type: ndcg_at_5
value: 22.949
- type: precision_at_1
value: 16.294
- type: precision_at_10
value: 5.112
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.587000000000001
- type: recall_at_1
value: 13.309999999999999
- type: recall_at_10
value: 37.851
- type: recall_at_100
value: 64.835
- type: recall_at_1000
value: 86.334
- type: recall_at_3
value: 23.493
- type: recall_at_5
value: 29.528
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.857999999999997
- type: map_at_10
value: 35.503
- type: map_at_100
value: 36.957
- type: map_at_1000
value: 37.065
- type: map_at_3
value: 32.275999999999996
- type: map_at_5
value: 34.119
- type: mrr_at_1
value: 31.954
- type: mrr_at_10
value: 40.851
- type: mrr_at_100
value: 41.863
- type: mrr_at_1000
value: 41.900999999999996
- type: mrr_at_3
value: 38.129999999999995
- type: mrr_at_5
value: 39.737
- type: ndcg_at_1
value: 31.954
- type: ndcg_at_10
value: 41.343999999999994
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 49.501
- type: ndcg_at_3
value: 36.047000000000004
- type: ndcg_at_5
value: 38.639
- type: precision_at_1
value: 31.954
- type: precision_at_10
value: 7.68
- type: precision_at_100
value: 1.247
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.589
- type: recall_at_1
value: 25.857999999999997
- type: recall_at_10
value: 53.43599999999999
- type: recall_at_100
value: 78.82400000000001
- type: recall_at_1000
value: 92.78999999999999
- type: recall_at_3
value: 38.655
- type: recall_at_5
value: 45.216
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.709
- type: map_at_10
value: 34.318
- type: map_at_100
value: 35.657
- type: map_at_1000
value: 35.783
- type: map_at_3
value: 31.326999999999998
- type: map_at_5
value: 33.021
- type: mrr_at_1
value: 30.137000000000004
- type: mrr_at_10
value: 39.093
- type: mrr_at_100
value: 39.992
- type: mrr_at_1000
value: 40.056999999999995
- type: mrr_at_3
value: 36.606
- type: mrr_at_5
value: 37.861
- type: ndcg_at_1
value: 30.137000000000004
- type: ndcg_at_10
value: 39.974
- type: ndcg_at_100
value: 45.647999999999996
- type: ndcg_at_1000
value: 48.259
- type: ndcg_at_3
value: 35.028
- type: ndcg_at_5
value: 37.175999999999995
- type: precision_at_1
value: 30.137000000000004
- type: precision_at_10
value: 7.363
- type: precision_at_100
value: 1.184
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 16.857
- type: precision_at_5
value: 11.963
- type: recall_at_1
value: 24.709
- type: recall_at_10
value: 52.087
- type: recall_at_100
value: 76.125
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 38.149
- type: recall_at_5
value: 43.984
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.40791666666667
- type: map_at_10
value: 32.458083333333335
- type: map_at_100
value: 33.691916666666664
- type: map_at_1000
value: 33.81191666666666
- type: map_at_3
value: 29.51625
- type: map_at_5
value: 31.168083333333335
- type: mrr_at_1
value: 27.96591666666666
- type: mrr_at_10
value: 36.528583333333344
- type: mrr_at_100
value: 37.404
- type: mrr_at_1000
value: 37.464333333333336
- type: mrr_at_3
value: 33.92883333333333
- type: mrr_at_5
value: 35.41933333333333
- type: ndcg_at_1
value: 27.96591666666666
- type: ndcg_at_10
value: 37.89141666666666
- type: ndcg_at_100
value: 43.23066666666666
- type: ndcg_at_1000
value: 45.63258333333333
- type: ndcg_at_3
value: 32.811249999999994
- type: ndcg_at_5
value: 35.22566666666667
- type: precision_at_1
value: 27.96591666666666
- type: precision_at_10
value: 6.834083333333332
- type: precision_at_100
value: 1.12225
- type: precision_at_1000
value: 0.15241666666666667
- type: precision_at_3
value: 15.264333333333335
- type: precision_at_5
value: 11.039416666666666
- type: recall_at_1
value: 23.40791666666667
- type: recall_at_10
value: 49.927083333333336
- type: recall_at_100
value: 73.44641666666668
- type: recall_at_1000
value: 90.19950000000001
- type: recall_at_3
value: 35.88341666666667
- type: recall_at_5
value: 42.061249999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.592000000000002
- type: map_at_10
value: 26.895999999999997
- type: map_at_100
value: 27.921000000000003
- type: map_at_1000
value: 28.02
- type: map_at_3
value: 24.883
- type: map_at_5
value: 25.812
- type: mrr_at_1
value: 22.698999999999998
- type: mrr_at_10
value: 29.520999999999997
- type: mrr_at_100
value: 30.458000000000002
- type: mrr_at_1000
value: 30.526999999999997
- type: mrr_at_3
value: 27.633000000000003
- type: mrr_at_5
value: 28.483999999999998
- type: ndcg_at_1
value: 22.698999999999998
- type: ndcg_at_10
value: 31.061
- type: ndcg_at_100
value: 36.398
- type: ndcg_at_1000
value: 38.89
- type: ndcg_at_3
value: 27.149
- type: ndcg_at_5
value: 28.627000000000002
- type: precision_at_1
value: 22.698999999999998
- type: precision_at_10
value: 5.106999999999999
- type: precision_at_100
value: 0.857
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 11.963
- type: precision_at_5
value: 8.221
- type: recall_at_1
value: 19.592000000000002
- type: recall_at_10
value: 41.329
- type: recall_at_100
value: 66.094
- type: recall_at_1000
value: 84.511
- type: recall_at_3
value: 30.61
- type: recall_at_5
value: 34.213
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.71
- type: map_at_10
value: 20.965
- type: map_at_100
value: 21.994
- type: map_at_1000
value: 22.133
- type: map_at_3
value: 18.741
- type: map_at_5
value: 19.951
- type: mrr_at_1
value: 18.307000000000002
- type: mrr_at_10
value: 24.66
- type: mrr_at_100
value: 25.540000000000003
- type: mrr_at_1000
value: 25.629
- type: mrr_at_3
value: 22.511
- type: mrr_at_5
value: 23.72
- type: ndcg_at_1
value: 18.307000000000002
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 30.229
- type: ndcg_at_1000
value: 33.623
- type: ndcg_at_3
value: 21.203
- type: ndcg_at_5
value: 23.006999999999998
- type: precision_at_1
value: 18.307000000000002
- type: precision_at_10
value: 4.725
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.14
- type: precision_at_5
value: 7.481
- type: recall_at_1
value: 14.71
- type: recall_at_10
value: 34.087
- type: recall_at_100
value: 57.147999999999996
- type: recall_at_1000
value: 81.777
- type: recall_at_3
value: 22.996
- type: recall_at_5
value: 27.73
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.472
- type: map_at_10
value: 32.699
- type: map_at_100
value: 33.867000000000004
- type: map_at_1000
value: 33.967000000000006
- type: map_at_3
value: 29.718
- type: map_at_5
value: 31.345
- type: mrr_at_1
value: 28.265
- type: mrr_at_10
value: 36.945
- type: mrr_at_100
value: 37.794
- type: mrr_at_1000
value: 37.857
- type: mrr_at_3
value: 34.266000000000005
- type: mrr_at_5
value: 35.768
- type: ndcg_at_1
value: 28.265
- type: ndcg_at_10
value: 38.35
- type: ndcg_at_100
value: 43.739
- type: ndcg_at_1000
value: 46.087
- type: ndcg_at_3
value: 33.004
- type: ndcg_at_5
value: 35.411
- type: precision_at_1
value: 28.265
- type: precision_at_10
value: 6.715999999999999
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 15.299
- type: precision_at_5
value: 10.951
- type: recall_at_1
value: 23.472
- type: recall_at_10
value: 51.413
- type: recall_at_100
value: 75.17
- type: recall_at_1000
value: 91.577
- type: recall_at_3
value: 36.651
- type: recall_at_5
value: 42.814
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.666
- type: map_at_10
value: 32.963
- type: map_at_100
value: 34.544999999999995
- type: map_at_1000
value: 34.792
- type: map_at_3
value: 29.74
- type: map_at_5
value: 31.5
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 38.013000000000005
- type: mrr_at_100
value: 38.997
- type: mrr_at_1000
value: 39.055
- type: mrr_at_3
value: 34.947
- type: mrr_at_5
value: 36.815
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.361000000000004
- type: ndcg_at_100
value: 45.186
- type: ndcg_at_1000
value: 47.867
- type: ndcg_at_3
value: 33.797
- type: ndcg_at_5
value: 36.456
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 15.876000000000001
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 23.666
- type: recall_at_10
value: 51.858000000000004
- type: recall_at_100
value: 77.805
- type: recall_at_1000
value: 94.504
- type: recall_at_3
value: 36.207
- type: recall_at_5
value: 43.094
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.662
- type: map_at_10
value: 23.594
- type: map_at_100
value: 24.593999999999998
- type: map_at_1000
value: 24.694
- type: map_at_3
value: 20.925
- type: map_at_5
value: 22.817999999999998
- type: mrr_at_1
value: 17.375
- type: mrr_at_10
value: 25.734
- type: mrr_at_100
value: 26.586
- type: mrr_at_1000
value: 26.671
- type: mrr_at_3
value: 23.044
- type: mrr_at_5
value: 24.975
- type: ndcg_at_1
value: 17.375
- type: ndcg_at_10
value: 28.186
- type: ndcg_at_100
value: 33.436
- type: ndcg_at_1000
value: 36.203
- type: ndcg_at_3
value: 23.152
- type: ndcg_at_5
value: 26.397
- type: precision_at_1
value: 17.375
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.786
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 15.662
- type: recall_at_10
value: 40.066
- type: recall_at_100
value: 65.006
- type: recall_at_1000
value: 85.94000000000001
- type: recall_at_3
value: 27.400000000000002
- type: recall_at_5
value: 35.002
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.853
- type: map_at_10
value: 15.568000000000001
- type: map_at_100
value: 17.383000000000003
- type: map_at_1000
value: 17.584
- type: map_at_3
value: 12.561
- type: map_at_5
value: 14.056
- type: mrr_at_1
value: 18.958
- type: mrr_at_10
value: 28.288000000000004
- type: mrr_at_100
value: 29.432000000000002
- type: mrr_at_1000
value: 29.498
- type: mrr_at_3
value: 25.049
- type: mrr_at_5
value: 26.857
- type: ndcg_at_1
value: 18.958
- type: ndcg_at_10
value: 22.21
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 33.583
- type: ndcg_at_3
value: 16.994999999999997
- type: ndcg_at_5
value: 18.95
- type: precision_at_1
value: 18.958
- type: precision_at_10
value: 7.192
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 12.573
- type: precision_at_5
value: 10.202
- type: recall_at_1
value: 8.853
- type: recall_at_10
value: 28.087
- type: recall_at_100
value: 53.701
- type: recall_at_1000
value: 76.29899999999999
- type: recall_at_3
value: 15.913
- type: recall_at_5
value: 20.658
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.077
- type: map_at_10
value: 20.788999999999998
- type: map_at_100
value: 30.429000000000002
- type: map_at_1000
value: 32.143
- type: map_at_3
value: 14.692
- type: map_at_5
value: 17.139
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.036
- type: mrr_at_100
value: 78.401
- type: mrr_at_1000
value: 78.404
- type: mrr_at_3
value: 76.75
- type: mrr_at_5
value: 77.47500000000001
- type: ndcg_at_1
value: 58.12500000000001
- type: ndcg_at_10
value: 44.015
- type: ndcg_at_100
value: 49.247
- type: ndcg_at_1000
value: 56.211999999999996
- type: ndcg_at_3
value: 49.151
- type: ndcg_at_5
value: 46.195
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 35.5
- type: precision_at_100
value: 11.355
- type: precision_at_1000
value: 2.1950000000000003
- type: precision_at_3
value: 53.083000000000006
- type: precision_at_5
value: 44.800000000000004
- type: recall_at_1
value: 9.077
- type: recall_at_10
value: 26.259
- type: recall_at_100
value: 56.547000000000004
- type: recall_at_1000
value: 78.551
- type: recall_at_3
value: 16.162000000000003
- type: recall_at_5
value: 19.753999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.44500000000001
- type: f1
value: 44.67067691783401
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.182
- type: map_at_10
value: 78.223
- type: map_at_100
value: 78.498
- type: map_at_1000
value: 78.512
- type: map_at_3
value: 76.71
- type: map_at_5
value: 77.725
- type: mrr_at_1
value: 73.177
- type: mrr_at_10
value: 82.513
- type: mrr_at_100
value: 82.633
- type: mrr_at_1000
value: 82.635
- type: mrr_at_3
value: 81.376
- type: mrr_at_5
value: 82.182
- type: ndcg_at_1
value: 73.177
- type: ndcg_at_10
value: 82.829
- type: ndcg_at_100
value: 83.84
- type: ndcg_at_1000
value: 84.07900000000001
- type: ndcg_at_3
value: 80.303
- type: ndcg_at_5
value: 81.846
- type: precision_at_1
value: 73.177
- type: precision_at_10
value: 10.241999999999999
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 31.247999999999998
- type: precision_at_5
value: 19.697
- type: recall_at_1
value: 68.182
- type: recall_at_10
value: 92.657
- type: recall_at_100
value: 96.709
- type: recall_at_1000
value: 98.184
- type: recall_at_3
value: 85.9
- type: recall_at_5
value: 89.755
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.108
- type: map_at_10
value: 33.342
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.478
- type: map_at_3
value: 29.067
- type: map_at_5
value: 31.563000000000002
- type: mrr_at_1
value: 41.667
- type: mrr_at_10
value: 49.913000000000004
- type: mrr_at_100
value: 50.724000000000004
- type: mrr_at_1000
value: 50.766
- type: mrr_at_3
value: 47.504999999999995
- type: mrr_at_5
value: 49.033
- type: ndcg_at_1
value: 41.667
- type: ndcg_at_10
value: 41.144
- type: ndcg_at_100
value: 48.326
- type: ndcg_at_1000
value: 51.486
- type: ndcg_at_3
value: 37.486999999999995
- type: ndcg_at_5
value: 38.78
- type: precision_at_1
value: 41.667
- type: precision_at_10
value: 11.358
- type: precision_at_100
value: 1.873
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 25
- type: precision_at_5
value: 18.519
- type: recall_at_1
value: 21.108
- type: recall_at_10
value: 47.249
- type: recall_at_100
value: 74.52
- type: recall_at_1000
value: 93.31
- type: recall_at_3
value: 33.271
- type: recall_at_5
value: 39.723000000000006
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.317
- type: map_at_10
value: 64.861
- type: map_at_100
value: 65.697
- type: map_at_1000
value: 65.755
- type: map_at_3
value: 61.258
- type: map_at_5
value: 63.590999999999994
- type: mrr_at_1
value: 80.635
- type: mrr_at_10
value: 86.528
- type: mrr_at_100
value: 86.66199999999999
- type: mrr_at_1000
value: 86.666
- type: mrr_at_3
value: 85.744
- type: mrr_at_5
value: 86.24300000000001
- type: ndcg_at_1
value: 80.635
- type: ndcg_at_10
value: 73.13199999999999
- type: ndcg_at_100
value: 75.927
- type: ndcg_at_1000
value: 76.976
- type: ndcg_at_3
value: 68.241
- type: ndcg_at_5
value: 71.071
- type: precision_at_1
value: 80.635
- type: precision_at_10
value: 15.326
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 43.961
- type: precision_at_5
value: 28.599999999999998
- type: recall_at_1
value: 40.317
- type: recall_at_10
value: 76.631
- type: recall_at_100
value: 87.495
- type: recall_at_1000
value: 94.362
- type: recall_at_3
value: 65.94200000000001
- type: recall_at_5
value: 71.499
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.686
- type: ap
value: 87.5577120393173
- type: f1
value: 91.6629447355139
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.702
- type: map_at_10
value: 36.414
- type: map_at_100
value: 37.561
- type: map_at_1000
value: 37.605
- type: map_at_3
value: 32.456
- type: map_at_5
value: 34.827000000000005
- type: mrr_at_1
value: 24.355
- type: mrr_at_10
value: 37.01
- type: mrr_at_100
value: 38.085
- type: mrr_at_1000
value: 38.123000000000005
- type: mrr_at_3
value: 33.117999999999995
- type: mrr_at_5
value: 35.452
- type: ndcg_at_1
value: 24.384
- type: ndcg_at_10
value: 43.456
- type: ndcg_at_100
value: 48.892
- type: ndcg_at_1000
value: 49.964
- type: ndcg_at_3
value: 35.475
- type: ndcg_at_5
value: 39.711
- type: precision_at_1
value: 24.384
- type: precision_at_10
value: 6.7940000000000005
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.052999999999999
- type: precision_at_5
value: 11.189
- type: recall_at_1
value: 23.702
- type: recall_at_10
value: 65.057
- type: recall_at_100
value: 90.021
- type: recall_at_1000
value: 98.142
- type: recall_at_3
value: 43.551
- type: recall_at_5
value: 53.738
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.62380300957591
- type: f1
value: 94.49871222100734
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.14090287277702
- type: f1
value: 60.32101258220515
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.84330867518494
- type: f1
value: 71.92248688515255
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.10692669804976
- type: f1
value: 77.9904839122866
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.822988923078444
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.38394880253403
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.82504612539082
- type: mrr
value: 32.84462298174977
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.029
- type: map_at_10
value: 14.088999999999999
- type: map_at_100
value: 17.601
- type: map_at_1000
value: 19.144
- type: map_at_3
value: 10.156
- type: map_at_5
value: 11.892
- type: mrr_at_1
value: 46.44
- type: mrr_at_10
value: 56.596999999999994
- type: mrr_at_100
value: 57.11000000000001
- type: mrr_at_1000
value: 57.14
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.774
- type: ndcg_at_1
value: 44.891999999999996
- type: ndcg_at_10
value: 37.134
- type: ndcg_at_100
value: 33.652
- type: ndcg_at_1000
value: 42.548
- type: ndcg_at_3
value: 41.851
- type: ndcg_at_5
value: 39.842
- type: precision_at_1
value: 46.44
- type: precision_at_10
value: 27.647
- type: precision_at_100
value: 8.309999999999999
- type: precision_at_1000
value: 2.146
- type: precision_at_3
value: 39.422000000000004
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 6.029
- type: recall_at_10
value: 18.907
- type: recall_at_100
value: 33.76
- type: recall_at_1000
value: 65.14999999999999
- type: recall_at_3
value: 11.584999999999999
- type: recall_at_5
value: 14.626
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.373000000000005
- type: map_at_10
value: 55.836
- type: map_at_100
value: 56.611999999999995
- type: map_at_1000
value: 56.63
- type: map_at_3
value: 51.747
- type: map_at_5
value: 54.337999999999994
- type: mrr_at_1
value: 44.147999999999996
- type: mrr_at_10
value: 58.42699999999999
- type: mrr_at_100
value: 58.902
- type: mrr_at_1000
value: 58.914
- type: mrr_at_3
value: 55.156000000000006
- type: mrr_at_5
value: 57.291000000000004
- type: ndcg_at_1
value: 44.119
- type: ndcg_at_10
value: 63.444
- type: ndcg_at_100
value: 66.40599999999999
- type: ndcg_at_1000
value: 66.822
- type: ndcg_at_3
value: 55.962
- type: ndcg_at_5
value: 60.228
- type: precision_at_1
value: 44.119
- type: precision_at_10
value: 10.006
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.135
- type: precision_at_5
value: 17.59
- type: recall_at_1
value: 39.373000000000005
- type: recall_at_10
value: 83.78999999999999
- type: recall_at_100
value: 96.246
- type: recall_at_1000
value: 99.324
- type: recall_at_3
value: 64.71900000000001
- type: recall_at_5
value: 74.508
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.199
- type: map_at_10
value: 82.892
- type: map_at_100
value: 83.578
- type: map_at_1000
value: 83.598
- type: map_at_3
value: 79.948
- type: map_at_5
value: 81.779
- type: mrr_at_1
value: 79.67
- type: mrr_at_10
value: 86.115
- type: mrr_at_100
value: 86.249
- type: mrr_at_1000
value: 86.251
- type: mrr_at_3
value: 85.08200000000001
- type: mrr_at_5
value: 85.783
- type: ndcg_at_1
value: 79.67
- type: ndcg_at_10
value: 86.839
- type: ndcg_at_100
value: 88.252
- type: ndcg_at_1000
value: 88.401
- type: ndcg_at_3
value: 83.86200000000001
- type: ndcg_at_5
value: 85.473
- type: precision_at_1
value: 79.67
- type: precision_at_10
value: 13.19
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.677
- type: precision_at_5
value: 24.118000000000002
- type: recall_at_1
value: 69.199
- type: recall_at_10
value: 94.321
- type: recall_at_100
value: 99.20400000000001
- type: recall_at_1000
value: 99.947
- type: recall_at_3
value: 85.787
- type: recall_at_5
value: 90.365
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82810046856353
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.38132611783628
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.127000000000001
- type: map_at_10
value: 12.235
- type: map_at_100
value: 14.417
- type: map_at_1000
value: 14.75
- type: map_at_3
value: 8.906
- type: map_at_5
value: 10.591000000000001
- type: mrr_at_1
value: 25.2
- type: mrr_at_10
value: 35.879
- type: mrr_at_100
value: 36.935
- type: mrr_at_1000
value: 36.997
- type: mrr_at_3
value: 32.783
- type: mrr_at_5
value: 34.367999999999995
- type: ndcg_at_1
value: 25.2
- type: ndcg_at_10
value: 20.509
- type: ndcg_at_100
value: 28.67
- type: ndcg_at_1000
value: 34.42
- type: ndcg_at_3
value: 19.948
- type: ndcg_at_5
value: 17.166
- type: precision_at_1
value: 25.2
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_100
value: 2.214
- type: precision_at_1000
value: 0.359
- type: precision_at_3
value: 18.533
- type: precision_at_5
value: 14.860000000000001
- type: recall_at_1
value: 5.127000000000001
- type: recall_at_10
value: 21.147
- type: recall_at_100
value: 44.946999999999996
- type: recall_at_1000
value: 72.89
- type: recall_at_3
value: 11.277
- type: recall_at_5
value: 15.042
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.0373011786213
- type: cos_sim_spearman
value: 79.27889560856613
- type: euclidean_pearson
value: 80.31186315495655
- type: euclidean_spearman
value: 79.41630415280811
- type: manhattan_pearson
value: 80.31755140442013
- type: manhattan_spearman
value: 79.43069870027611
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.8659751342045
- type: cos_sim_spearman
value: 76.95377612997667
- type: euclidean_pearson
value: 81.24552945497848
- type: euclidean_spearman
value: 77.18236963555253
- type: manhattan_pearson
value: 81.26477607759037
- type: manhattan_spearman
value: 77.13821753062756
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.34597139044875
- type: cos_sim_spearman
value: 84.124169425592
- type: euclidean_pearson
value: 83.68590721511401
- type: euclidean_spearman
value: 84.18846190846398
- type: manhattan_pearson
value: 83.57630235061498
- type: manhattan_spearman
value: 84.10244043726902
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67641885599572
- type: cos_sim_spearman
value: 80.46450725650428
- type: euclidean_pearson
value: 81.61645042715865
- type: euclidean_spearman
value: 80.61418394236874
- type: manhattan_pearson
value: 81.55712034928871
- type: manhattan_spearman
value: 80.57905670523951
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.86650310886782
- type: cos_sim_spearman
value: 89.76081629222328
- type: euclidean_pearson
value: 89.1530747029954
- type: euclidean_spearman
value: 89.80990657280248
- type: manhattan_pearson
value: 89.10640563278132
- type: manhattan_spearman
value: 89.76282108434047
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.93864027911118
- type: cos_sim_spearman
value: 85.47096193999023
- type: euclidean_pearson
value: 85.03141840870533
- type: euclidean_spearman
value: 85.43124029598181
- type: manhattan_pearson
value: 84.99002664393512
- type: manhattan_spearman
value: 85.39169195120834
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.7045343749832
- type: cos_sim_spearman
value: 89.03262221146677
- type: euclidean_pearson
value: 89.56078218264365
- type: euclidean_spearman
value: 89.17827006466868
- type: manhattan_pearson
value: 89.52717595468582
- type: manhattan_spearman
value: 89.15878115952923
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.20191302875551
- type: cos_sim_spearman
value: 64.11446552557646
- type: euclidean_pearson
value: 64.6918197393619
- type: euclidean_spearman
value: 63.440182631197764
- type: manhattan_pearson
value: 64.55692904121835
- type: manhattan_spearman
value: 63.424877742756266
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.37793104662344
- type: cos_sim_spearman
value: 87.7357802629067
- type: euclidean_pearson
value: 87.4286301545109
- type: euclidean_spearman
value: 87.78452920777421
- type: manhattan_pearson
value: 87.42445169331255
- type: manhattan_spearman
value: 87.78537677249598
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.31465405081792
- type: mrr
value: 95.7173781193389
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.904
- type: map_at_100
value: 68.539
- type: map_at_1000
value: 68.562
- type: map_at_3
value: 65.415
- type: map_at_5
value: 66.788
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 68.797
- type: mrr_at_100
value: 69.236
- type: mrr_at_1000
value: 69.257
- type: mrr_at_3
value: 66.667
- type: mrr_at_5
value: 67.967
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 72.24199999999999
- type: ndcg_at_100
value: 74.86
- type: ndcg_at_1000
value: 75.354
- type: ndcg_at_3
value: 67.93400000000001
- type: ndcg_at_5
value: 70.02199999999999
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.778000000000002
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.383
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.094
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8029702970297
- type: cos_sim_ap
value: 94.9210324173411
- type: cos_sim_f1
value: 89.8521162672106
- type: cos_sim_precision
value: 91.67533818938605
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.69504950495049
- type: dot_ap
value: 90.4919719146181
- type: dot_f1
value: 84.72289156626506
- type: dot_precision
value: 81.76744186046511
- type: dot_recall
value: 87.9
- type: euclidean_accuracy
value: 99.79702970297029
- type: euclidean_ap
value: 94.87827463795753
- type: euclidean_f1
value: 89.55680081507896
- type: euclidean_precision
value: 91.27725856697819
- type: euclidean_recall
value: 87.9
- type: manhattan_accuracy
value: 99.7990099009901
- type: manhattan_ap
value: 94.87587025149682
- type: manhattan_f1
value: 89.76298537569339
- type: manhattan_precision
value: 90.53916581892166
- type: manhattan_recall
value: 89
- type: max_accuracy
value: 99.8029702970297
- type: max_ap
value: 94.9210324173411
- type: max_f1
value: 89.8521162672106
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.92385753948724
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.671756975431144
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.677928036739004
- type: mrr
value: 51.56413133435193
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.523589340819683
- type: cos_sim_spearman
value: 30.187407518823235
- type: dot_pearson
value: 29.039713969699015
- type: dot_spearman
value: 29.114740651155508
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.211
- type: map_at_10
value: 1.6199999999999999
- type: map_at_100
value: 8.658000000000001
- type: map_at_1000
value: 21.538
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.919
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.18599999999999
- type: mrr_at_100
value: 86.18599999999999
- type: mrr_at_1000
value: 86.18599999999999
- type: mrr_at_3
value: 85
- type: mrr_at_5
value: 85.9
- type: ndcg_at_1
value: 74
- type: ndcg_at_10
value: 66.542
- type: ndcg_at_100
value: 50.163999999999994
- type: ndcg_at_1000
value: 45.696999999999996
- type: ndcg_at_3
value: 71.531
- type: ndcg_at_5
value: 70.45
- type: precision_at_1
value: 78
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 51.06
- type: precision_at_1000
value: 20.022000000000002
- type: precision_at_3
value: 76
- type: precision_at_5
value: 74.8
- type: recall_at_1
value: 0.211
- type: recall_at_10
value: 1.813
- type: recall_at_100
value: 12.098
- type: recall_at_1000
value: 42.618
- type: recall_at_3
value: 0.603
- type: recall_at_5
value: 0.987
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.2079999999999997
- type: map_at_10
value: 7.777000000000001
- type: map_at_100
value: 12.825000000000001
- type: map_at_1000
value: 14.196
- type: map_at_3
value: 4.285
- type: map_at_5
value: 6.177
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 42.635
- type: mrr_at_100
value: 43.955
- type: mrr_at_1000
value: 43.955
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.088
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 20.666999999999998
- type: ndcg_at_100
value: 31.840000000000003
- type: ndcg_at_1000
value: 43.191
- type: ndcg_at_3
value: 23.45
- type: ndcg_at_5
value: 22.994
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 17.959
- type: precision_at_100
value: 6.755
- type: precision_at_1000
value: 1.4200000000000002
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 23.673
- type: recall_at_1
value: 2.2079999999999997
- type: recall_at_10
value: 13.144
- type: recall_at_100
value: 42.491
- type: recall_at_1000
value: 77.04299999999999
- type: recall_at_3
value: 5.3469999999999995
- type: recall_at_5
value: 9.139
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9044
- type: ap
value: 14.625783489340755
- type: f1
value: 54.814936562590546
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.94227504244483
- type: f1
value: 61.22516038508854
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.602409155145864
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.94641473445789
- type: cos_sim_ap
value: 76.91572747061197
- type: cos_sim_f1
value: 70.14348097317529
- type: cos_sim_precision
value: 66.53254437869822
- type: cos_sim_recall
value: 74.1688654353562
- type: dot_accuracy
value: 84.80061989628658
- type: dot_ap
value: 70.7952548895177
- type: dot_f1
value: 65.44780728844965
- type: dot_precision
value: 61.53310104529617
- type: dot_recall
value: 69.89445910290237
- type: euclidean_accuracy
value: 86.94641473445789
- type: euclidean_ap
value: 76.80774009393652
- type: euclidean_f1
value: 70.30522503879979
- type: euclidean_precision
value: 68.94977168949772
- type: euclidean_recall
value: 71.71503957783642
- type: manhattan_accuracy
value: 86.8629671574179
- type: manhattan_ap
value: 76.76518632600317
- type: manhattan_f1
value: 70.16056518946692
- type: manhattan_precision
value: 68.360450563204
- type: manhattan_recall
value: 72.0580474934037
- type: max_accuracy
value: 86.94641473445789
- type: max_ap
value: 76.91572747061197
- type: max_f1
value: 70.30522503879979
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.10428066907285
- type: cos_sim_ap
value: 86.25114759921435
- type: cos_sim_f1
value: 78.37857884586856
- type: cos_sim_precision
value: 75.60818546078993
- type: cos_sim_recall
value: 81.35971666153372
- type: dot_accuracy
value: 87.41995575736406
- type: dot_ap
value: 81.51838010086782
- type: dot_f1
value: 74.77398015435503
- type: dot_precision
value: 71.53002390662354
- type: dot_recall
value: 78.32614721281182
- type: euclidean_accuracy
value: 89.12368533395428
- type: euclidean_ap
value: 86.33456799874504
- type: euclidean_f1
value: 78.45496750232127
- type: euclidean_precision
value: 75.78388462366364
- type: euclidean_recall
value: 81.32121958731136
- type: manhattan_accuracy
value: 89.10622113556099
- type: manhattan_ap
value: 86.31215061745333
- type: manhattan_f1
value: 78.40684906011539
- type: manhattan_precision
value: 75.89536643366722
- type: manhattan_recall
value: 81.09023714197721
- type: max_accuracy
value: 89.12368533395428
- type: max_ap
value: 86.33456799874504
- type: max_f1
value: 78.45496750232127
language:
- en
license: mit
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory use by 2x-4x using int8 inference in C++ on CPU or GPU.

This is a quantized version of [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2).
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-e5-large-v2"
model_name_orig="intfloat/e5-large-v2"
from hf_hub_ctranslate2 import EncoderCT2fromHfHub
model = EncoderCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16"
)
outputs = model.generate(
text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
max_length=64,
) # perform downstream tasks on outputs
outputs["pooler_output"]
outputs["last_hidden_state"]
outputs["attention_mask"]
# Alternatively, use the SentenceTransformer mix-in
# for end-to-end sentence embedding generation
# (this loads the original model, not this CT2fast-HF repo)
from hf_hub_ctranslate2 import CT2SentenceTransformer
model = CT2SentenceTransformer(
model_name_orig, compute_type="int8_float16", device="cuda"
)
embeddings = model.encode(
["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
batch_size=32,
convert_to_numpy=True,
normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100
# Hint: you can also serve this model via a REST API,
# e.g. with github.com/michaelfeil/infinity
```
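The final `embeddings @ embeddings.T` step above can be sanity-checked on dummy normalized vectors, with no model download; the snippet below only illustrates the scoring math, not the model itself:

```python
# Cosine scores from dummy L2-normalized embeddings (illustration only).
import numpy as np

emb = np.random.rand(3, 8)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-length rows
scores = (emb @ emb.T) * 100                        # cosine similarity * 100
print(scores.shape)                                 # (3, 3)
```

Because each row is unit-length, the diagonal self-similarities are exactly 100 and the matrix is symmetric.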
Checkpoint compatible with [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-10-13 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# E5-large-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
model = AutoModel.from_pretrained('intfloat/e5-large-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
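The `average_pool` helper above can be checked on dummy tensors without loading the model; the following is a NumPy stand-in for the same masked mean pooling (shapes are illustrative only):

```python
# NumPy stand-in for average_pool: mask out padding, then mean over real tokens.
import numpy as np

hidden = np.random.rand(2, 4, 8)                   # (batch, seq_len, hidden)
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])      # attention mask, 0 = padding
summed = (hidden * mask[..., None]).sum(axis=1)    # zero padded positions, sum tokens
pooled = summed / mask.sum(axis=1)[..., None]      # divide by count of real tokens
pooled /= np.linalg.norm(pooled, axis=1, keepdims=True)  # L2-normalize as in the card
print(pooled.shape)                                # (2, 8)
```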
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-large-v2')
input_texts = [
'query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**

This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
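To see why only the relative order matters, note that any monotonic rescaling of the scores preserves the ranking; the numbers below are made up purely for illustration:

```python
# Made-up cosine scores in the typical 0.7-1.0 band.
import numpy as np

scores = np.array([0.93, 0.81, 0.76, 0.88])
rescaled = (scores - 0.7) / 0.3            # monotonic rescaling onto ~0-1
order = np.argsort(-scores)                # ranking by original scores
order_rescaled = np.argsort(-rescaled)     # ranking by rescaled scores
print(order)                               # [0 3 1 2] -- identical either way
```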
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
gokuls/sa_BERT_24_cola | 2023-06-15T21:09:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/sa_BERT_24_cola | 0 | 2 | transformers | 2023-06-15T20:59:44 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: sa_BERT_24_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_24_cola
This model is a fine-tuned version of [gokuls/bert_base_24](https://huggingface.co/gokuls/bert_base_24) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6120
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6138 | 1.0 | 90 | 0.6120 | 0.0 | 0.6913 |
| 0.5898 | 2.0 | 180 | 0.6242 | 0.0656 | 0.6932 |
| 0.5491 | 3.0 | 270 | 0.6798 | 0.0733 | 0.6405 |
| 0.5027 | 4.0 | 360 | 0.6873 | 0.0667 | 0.6328 |
| 0.4549 | 5.0 | 450 | 0.7841 | 0.1025 | 0.6299 |
| 0.4177 | 6.0 | 540 | 0.8221 | 0.0827 | 0.5849 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
gokuls/sa_BERT_48_cola | 2023-06-15T21:15:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/sa_BERT_48_cola | 0 | 2 | transformers | 2023-06-15T21:03:59 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: sa_BERT_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.08674114303226269
- name: Accuracy
type: accuracy
value: 0.6951102614402771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_48_cola
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6131
- Matthews Correlation: 0.0867
- Accuracy: 0.6951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6218 | 1.0 | 90 | 0.6131 | 0.0 | 0.6913 |
| 0.5938 | 2.0 | 180 | 0.6131 | 0.0867 | 0.6951 |
| 0.5535 | 3.0 | 270 | 0.6607 | 0.0838 | 0.6472 |
| 0.5036 | 4.0 | 360 | 0.6657 | 0.0746 | 0.6721 |
| 0.4578 | 5.0 | 450 | 0.8804 | 0.1134 | 0.6222 |
| 0.4167 | 6.0 | 540 | 0.7758 | 0.0645 | 0.6194 |
| 0.3852 | 7.0 | 630 | 0.8373 | 0.0386 | 0.6299 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
Kamaljp/topic_docs5000 | 2023-06-16T01:39:22.000Z | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | Kamaljp | null | null | Kamaljp/topic_docs5000 | 0 | 2 | bertopic | 2023-06-16T01:36:55 |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# topic_docs5000
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("Kamaljp/topic_docs5000")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 30
* Number of training documents: 5000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | the - to - of - and - is | 12 | -1_the_to_of_and |
| 0 | the - in - to - he - game | 1606 | 0_the_in_to_he |
| 1 | the - drive - to - with - for | 450 | 1_the_drive_to_with |
| 2 | the - to - that - of - and | 344 | 2_the_to_that_of |
| 3 | the - of - and - in - to | 246 | 3_the_of_and_in |
| 4 | of - to - the - is - and | 220 | 4_of_to_the_is |
| 5 | the - car - and - it - for | 203 | 5_the_car_and_it |
| 6 | the - of - that - to - is | 186 | 6_the_of_that_to |
| 7 | call - three - bittrolff - uhhhh - test | 172 | 7_call_three_bittrolff_uhhhh |
| 8 | the - to - be - of - key | 172 | 8_the_to_be_of |
| 9 | the - space - of - and - to | 169 | 9_the_space_of_and |
| 10 | the - openwindows - to - window - and | 169 | 10_the_openwindows_to_window |
| 11 | for - and - 100 - to - the | 146 | 11_for_and_100_to |
| 12 | windows - dos - the - and - to | 132 | 12_windows_dos_the_and |
| 13 | the - bike - to - my - was | 105 | 13_the_bike_to_my |
| 14 | you - that - to - of - your | 100 | 14_you_that_to_of |
| 15 | for - and - to - mail - send | 100 | 15_for_and_to_mail |
| 16 | to - that - homosexual - of - is | 94 | 16_to_that_homosexual_of |
| 17 | is - that - objective - of - science | 66 | 17_is_that_objective_of |
| 18 | printer - fonts - deskjet - hp - the | 56 | 18_printer_fonts_deskjet_hp |
| 19 | jpeg - image - gif - file - format | 45 | 19_jpeg_image_gif_file |
| 20 | points - graeme - polygon - the - lines | 44 | 20_points_graeme_polygon_the |
| 21 | radar - detector - detectors - is - the | 28 | 21_radar_detector_detectors_is |
| 22 | hotel - dj - for - ticket - price | 27 | 22_hotel_dj_for_ticket |
| 23 | insurance - health - private - the - and | 26 | 23_insurance_health_private_the |
| 24 | water - battery - temperature - the - discharge | 21 | 24_water_battery_temperature_the |
| 25 | oil - paint - it - wax - and | 17 | 25_oil_paint_it_wax |
| 26 | drugs - cocaine - lsd - drug - license | 16 | 26_drugs_cocaine_lsd_drug |
| 27 | motif - toolkit - cosecomplient - api - mean | 15 | 27_motif_toolkit_cosecomplient_api |
| 28 | maxaxaxaxaxaxaxaxaxaxaxaxaxaxax - entry - entries - rules - we | 13 | 28_maxaxaxaxaxaxaxaxaxaxaxaxaxaxax_entry_entries_rules |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: 30
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
justinhoang/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-16T02:46:37.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | justinhoang | null | null | justinhoang/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-16T02:45:57 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 619.00 +/- 103.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga justinhoang -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga justinhoang -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga justinhoang
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
SkeVinci/ppo-Huggy | 2023-06-16T06:04:27.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | SkeVinci | null | null | SkeVinci/ppo-Huggy | 0 | 2 | ml-agents | 2023-06-16T06:04:24 | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SkeVinci/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
IvanKun/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-16T12:28:30.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | IvanKun | null | null | IvanKun/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-16T12:27:58 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 574.00 +/- 92.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga IvanKun -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga IvanKun -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga IvanKun
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
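As a hedged illustration (the function below is ours, not part of the RL Zoo or SB3's API), the `exploration_fraction` and `exploration_final_eps` values above define DQN's linear epsilon-greedy schedule, which can be sketched as:

```python
def exploration_eps(step: int,
                    n_timesteps: int = 1_000_000,
                    exploration_fraction: float = 0.1,
                    final_eps: float = 0.01,
                    initial_eps: float = 1.0) -> float:
    """Linearly anneal epsilon from initial_eps to final_eps over the first
    exploration_fraction of training, then hold it at final_eps."""
    progress = step / (exploration_fraction * n_timesteps)
    if progress >= 1.0:
        return final_eps
    return initial_eps + progress * (final_eps - initial_eps)

# With the hyperparameters above, epsilon starts at 1.0, reaches 0.01 at
# step 100_000, and stays there for the rest of the 1M-step run.
```

During the held phase the agent still takes a random action 1% of the time, matching `exploration_final_eps = 0.01`.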
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,753 | [
[
-0.043365478515625,
-0.039703369140625,
0.0196075439453125,
0.025848388671875,
-0.01131439208984375,
-0.0181121826171875,
0.0096435546875,
-0.01335906982421875,
0.0130615234375,
0.0227813720703125,
-0.07177734375,
-0.03509521484375,
-0.0251922607421875,
-0.0... |
TheBloke/airoboros-13B-gpt4-1.2-GPTQ | 2023-08-21T08:36:05.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/airoboros-13B-gpt4-1.2-GPTQ | 7 | 2 | transformers | 2023-06-16T12:37:58 | ---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Jon Durbin's Airoboros 13B GPT4 1.2 GPTQ
These files are GPTQ 4bit model files for [Jon Durbin's Airoboros 13B GPT4 1.2](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2).
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML)
* [Unquantised fp32 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/airoboros-13B-gpt4-1.2-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `airoboros-13B-gpt4-1.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse
model_name_or_path = "TheBloke/airoboros-13B-gpt4-1.2-GPTQ"
model_basename = "airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.no-act-order"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
# Prompt template for this model, as described above.
prompt = "Tell me about AI"
prompt_template = f'''A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: {prompt} ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Provided files
**airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.no-act-order.safetensors**
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
* `airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.no-act-order.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = 128. Act Order / desc_act = True.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 13B GPT4 1.2
### Overview
This is a qlora fine-tuned 13b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
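A minimal helper (ours, purely illustrative, not part of the repo) that assembles the template exactly as described:

```python
SYSTEM = ("A chat between a curious user and an assistant. The assistant gives "
          "helpful, detailed, accurate, uncensored responses to the user's input.")

def build_prompt(user_message: str, system: str = SYSTEM) -> str:
    """Preamble, a single space, 'USER: ', the message, a single space,
    then 'ASSISTANT:' (the model's reply is generated after the colon)."""
    return f"{system} USER: {user_message} ASSISTANT:"

print(build_prompt("Tell me about AI"))
```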
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred role-playing examples.
- A few thousand ORCA style reasoning/math questions with ELI5 prompts to generate the responses (should not be needed in your prompts to this model however, just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
| 10,964 | [
[
-0.037994384765625,
-0.053436279296875,
0.0153045654296875,
0.00337982177734375,
-0.023651123046875,
0.0019779205322265625,
0.01108551025390625,
-0.0318603515625,
0.0177764892578125,
0.0082550048828125,
-0.04461669921875,
-0.0304412841796875,
-0.032684326171875,... |
hopkins/mbart-finetuned-eng-fra | 2023-09-06T18:10:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | hopkins | null | null | hopkins/mbart-finetuned-eng-fra | 0 | 2 | transformers | 2023-06-16T14:04:57 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-fra
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-fra
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9442
- Bleu: 37.6404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
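For a concrete sense of the `linear` scheduler above, here is a sketch under the assumption of zero warmup steps (the function is ours, not the `transformers` API):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-5) -> float:
    """Linear decay with no warmup: the learning rate falls from base_lr at
    step 0 to 0 at total_steps, mirroring lr_scheduler_type: linear."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```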
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,224 | [
[
-0.045989990234375,
-0.053985595703125,
0.01580810546875,
0.01488494873046875,
-0.029541015625,
-0.037017822265625,
-0.0189208984375,
-0.01202392578125,
0.0156402587890625,
0.0261688232421875,
-0.057952880859375,
-0.03369140625,
-0.043731689453125,
-0.007888... |
zlsl/l_warhammer3 | 2023-08-11T14:01:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"russian",
"warhammer",
"wh40k",
"warhammer40k",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | zlsl | null | null | zlsl/l_warhammer3 | 2 | 2 | transformers | 2023-06-16T16:32:54 | ---
license: apache-2.0
language:
- ru
library_name: transformers
pipeline_tag: text-generation
tags:
- russian
- warhammer
- wh40k
- warhammer40k
widget:
- text: "Конрад Керз подошел к огромным воротам"
- text: "Болтер был украшен"
- text: "Толпа хаоситов приближалась"
- text: "Это был инквизитор"
---
Text generation for the Warhammer 40k universe in Russian. The dataset is ~1.2 GB of texts from the universe.
## For text-generation-webui users
The tool's handling of GPT-2, GPT-J, GPT-NEO and similar models is broken: the tokenizer is loaded incorrectly.
The error looks like this:<br>
>eos_token_id = eos_token_id[0]
>IndexError: list index out of range
The fix is simple: in the file modules/models.py, inside the load_tokenizer() function, add the line<br>
<code>tokenizer.eos_token_id = 2</code><br>
before<br>
<code>return tokenizer</code>
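Shown as a diff, the one-line patch described above looks like this (the surrounding context in modules/models.py depends on your text-generation-webui version, so treat this as a sketch):

```diff
 # modules/models.py, inside load_tokenizer()
+    tokenizer.eos_token_id = 2
     return tokenizer
```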
>And at that moment I saw a human figure with a sword and shield appear out of the darkness. He was clad in black garments, and a death mask adorned his face. I recognized him: it was the very ghost-warrior who had saved me from the fire of the warp. His face was hidden under a hood, but I could make out the warrior's features: he wore a mask that resembled a braincase.
>
>"You are late, my brother," the ghost said in a voice like metal scraping on glass. "You should not have come here."
>
>I tried to answer him, but the words stuck in my mouth. A picture of what had happened to me after my return to reality surfaced in my mind. I remembered all the fears and doubts I had felt when facing this being. And now I understood why it differed so strongly from all the other warriors of the order. It looked just as strong and self-assured as I had seen myself in my memories of my first visit to this room. But at the same time it seemed to me that its appearance had changed. The warrior looked older and more weary than at any time in my life. He wore armor the color of ivory, adorned with gold and silver. A sword protruded from his breastplate, its hilt ending in two spikes. And on his chest could be seen a symbol resembling the head of an ork.
>Abaddon burst into the hall.
>
>"What is going on?" he asked, paying no attention to the other warriors of his Legion.
>
>"I want to know!" Abaddon barked, turning to his brother-captain: "What do you see?"
>
>Abaddon pointed at the wall of the hall. A huge hole gaped in it, through which light broke in. It was so bright that it could be seen with the naked eye even from the center of the room.
>
>"This is a portal, my brother," said Abaddon. "We must go there."
>
>He led his brothers forward along the corridor that led out of the throne hall.
>
>"Stay here, my brothers," Abaddon ordered. "I will go first."
>
>Both warriors stepped through the hole in the wall and found themselves in an enormous hall with a high ceiling. Half-darkness reigned here, but Abaddon could make out many statues depicting the heroes of antiquity. They stood in rows along the walls, their armor covered with the dust of centuries. A long row of benches stretched along one wall, and by the opposite wall stood a huge table heaped with parchment scrolls. On the table lay maps, which Abaddon took from the table.
| 3,129 | [
[
-0.048370361328125,
-0.03179931640625,
0.014617919921875,
0.0057373046875,
-0.0285797119140625,
0.0078277587890625,
0.012969970703125,
-0.022918701171875,
0.050994873046875,
0.0092620849609375,
-0.03839111328125,
-0.040679931640625,
-0.0304412841796875,
0.00... |
zlsl/m_physics | 2023-08-11T14:02:21.000Z | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"physics",
"физика",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | zlsl | null | null | zlsl/m_physics | 0 | 2 | transformers | 2023-06-16T16:54:34 | ---
license: apache-2.0
language:
- ru
library_name: transformers
pipeline_tag: text-generation
tags:
- physics
- физика
---
Text generation; topic: physics.
## For text-generation-webui users
The tool's handling of GPT-2, GPT-J, GPT-NEO and similar models is broken: the tokenizer is loaded incorrectly.
The error looks like this:<br>
>eos_token_id = eos_token_id[0]
>IndexError: list index out of range
The fix is simple: in the file modules/models.py, inside the load_tokenizer() function, add the line<br>
<code>tokenizer.eos_token_id = 2</code><br>
before<br>
<code>return tokenizer</code>
| 585 | [
[
-0.01212310791015625,
-0.04364013671875,
0.01499176025390625,
0.0198974609375,
-0.03277587890625,
0.00074005126953125,
0.029144287109375,
-0.003040313720703125,
0.00980377197265625,
-0.004398345947265625,
-0.052764892578125,
-0.029083251953125,
-0.02897644042968... |
zlsl/m_cosmos | 2023-08-11T14:02:14.000Z | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"russian",
"cosmos",
"astrophysics",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | zlsl | null | null | zlsl/m_cosmos | 1 | 2 | transformers | 2023-06-16T17:00:47 | ---
license: apache-2.0
language:
- ru
library_name: transformers
pipeline_tag: text-generation
tags:
- russian
- cosmos
- astrophysics
---
A model for generating texts about space, astrophysics, and spaceflight.
## For text-generation-webui users
The tool's handling of GPT-2, GPT-J, GPT-NEO and similar models is broken: the tokenizer is loaded incorrectly.
The error looks like this:<br>
>eos_token_id = eos_token_id[0]
>IndexError: list index out of range
The fix is simple: in the file modules/models.py, inside the load_tokenizer() function, add the line<br>
<code>tokenizer.eos_token_id = 2</code><br>
before<br>
<code>return tokenizer</code>
| 631 | [
[
-0.01305389404296875,
-0.0455322265625,
0.0164794921875,
0.0186920166015625,
-0.03558349609375,
0.0042266845703125,
0.0300445556640625,
-0.0026092529296875,
0.0169525146484375,
-0.0020923614501953125,
-0.052734375,
-0.028045654296875,
-0.0299224853515625,
0.... |
JvThunder/ppo-Pyramids | 2023-06-16T18:37:51.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | JvThunder | null | null | JvThunder/ppo-Pyramids | 0 | 2 | ml-agents | 2023-06-16T18:37:41 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JvThunder/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| 1,334 | [
[
-0.040283203125,
-0.034271240234375,
0.0021514892578125,
0.01371002197265625,
-0.01055908203125,
0.01251983642578125,
0.0167083740234375,
-0.01461029052734375,
0.033477783203125,
0.0304412841796875,
-0.040557861328125,
-0.05010986328125,
-0.0296173095703125,
... |