modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
antonkurylo/t5-base-news_headlines_7 | 2023-06-06T08:15:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"en",
"dataset:valurank/News_headlines",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | antonkurylo | null | null | antonkurylo/t5-base-news_headlines_7 | 0 | 2 | transformers | 2023-06-02T18:59:48 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
widget:
- text: >
As the demands of climate change grow, businesses are realizing the
imperative of embracing sustainability. Driven by ecological necessity and
evolving consumer expectations, this shift necessitates a complete overhaul
of traditional business models towards a circular economy, emphasizing
resource efficiency and waste reduction.
Adopting sustainable practices offers businesses multiple benefits: reduced operating costs, enhanced brand reputation, and increased customer loyalty. As such, sustainability is a strategic tool for businesses looking to future-proof themselves.
Companies like Unilever and Tesla serve as models of this transformation. Unilever's sustainable living brands have outperformed the rest of their portfolio, while Tesla's entire business model centres around sustainability, proving that environmental consciousness and profitability can coexist.
In our interconnected world, the impacts of businesses extend to society and the environment, necessitating alignment with the global push for sustainability. With sustainability no longer being a choice but an imperative, businesses adopting it will be the leaders in the new business paradigm. In a nutshell, to thrive in the evolving market, embracing sustainability is the new business imperative. The future of business is unquestionably green.
model-index:
- name: t5-base-news_headlines
results: []
language:
- en
datasets:
- valurank/News_headlines
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-news_headlines
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the valurank/News_headlines dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9975
- Rouge1: 53.7064
- Rouge2: 34.6278
- Rougel: 50.5129
- Rougelsum: 50.5108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="antonkurylo/t5-base-news_headlines_7")
text = "As the demands of climate change grow, businesses are realizing the imperative of embracing sustainability. " \
"Driven by ecological necessity and evolving consumer expectations, this shift necessitates a complete " \
"overhaul of traditional business models towards a circular economy, emphasizing resource efficiency and " \
"waste reduction.\nAdopting sustainable practices offers businesses multiple benefits: reduced operating " \
"costs, enhanced brand reputation, and increased customer loyalty. As such, sustainability is a strategic " \
"tool for businesses looking to future-proof themselves.\nCompanies like Unilever and Tesla serve as " \
"models of this transformation. Unilever's sustainable living brands have outperformed the rest of their " \
"portfolio, while Tesla's entire business model centres around sustainability, proving that environmental " \
"consciousness and profitability can coexist.\nIn our interconnected world, the impacts of businesses " \
"extend to society and the environment, necessitating alignment with the global push for sustainability. " \
"With sustainability no longer being a choice but an imperative, businesses adopting it will be the " \
"leaders in the new business paradigm. In a nutshell, to thrive in the evolving market, embracing " \
"sustainability is the new business imperative. The future of business is unquestionably green."
summarizer(text)
```
### Expected Output
```
[{'summary_text': "The future of business is unquestionably green. Here's how it works . Unilever and Tesla are examples of the transformation"}]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- weight_decay: 0.01
- lr_scheduler_type: linear
- num_epochs: 7
- max_text_length: 512
- max_target_length: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.9933 | 1.0 | 1531 | 1.4942 | 44.2439 | 22.1239 | 40.5281 | 40.5525 |
| 1.6029 | 2.0 | 3062 | 1.2824 | 46.5726 | 25.1122 | 43.131 | 43.151 |
| 1.409 | 3.0 | 4593 | 1.2358 | 48.3188 | 27.7403 | 44.9576 | 45.0009 |
| 1.2699 | 4.0 | 6124 | 1.1600 | 50.9858 | 30.6655 | 47.775 | 47.8414 |
| 1.1696 | 5.0 | 7655 | 1.0607 | 52.2212 | 32.6952 | 49.0023 | 49.0812 |
| 1.0934 | 6.0 | 9186 | 1.0173 | 53.1629 | 33.9552 | 49.9629 | 50.0118 |
| 1.049 | 7.0 | 10717 | 0.9975 | 53.7064 | 34.6278 | 50.5129 | 50.5108 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 5,383 | [
[
-0.031158447265625,
-0.045135498046875,
0.0118255615234375,
0.007396697998046875,
-0.01457977294921875,
0.0000635981559753418,
-0.001556396484375,
-0.0281219482421875,
0.0286712646484375,
0.02655029296875,
-0.0576171875,
-0.049285888671875,
-0.0318603515625,
... |
huantd/distilbert-base-uncased-finetuned-sst-2-english | 2023-06-02T22:15:39.000Z | [
"transformers.js",
"onnx",
"distilbert",
"text-classification",
"region:us"
] | text-classification | huantd | null | null | huantd/distilbert-base-uncased-finetuned-sst-2-english | 0 | 2 | transformers.js | 2023-06-02T22:13:54 | ---
library_name: "transformers.js"
---
https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english with ONNX weights to be compatible with Transformers.js.
| 169 | [
[
-0.0170135498046875,
-0.01131439208984375,
0.01222991943359375,
0.062744140625,
-0.0183868408203125,
0.0276641845703125,
-0.006072998046875,
-0.01255035400390625,
0.059112548828125,
0.0242462158203125,
-0.068115234375,
-0.010528564453125,
-0.045379638671875,
... |
platzi/platzi-distilroberta-base-mrpc-glue-rafa-rivera | 2023-06-03T03:55:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-rafa-rivera | 0 | 2 | transformers | 2023-06-02T23:45:47 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-rafa-rivera
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8235294117647058
- name: F1
type: f1
value: 0.8686131386861314
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-rafa-rivera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8316
- Accuracy: 0.8235
- F1: 0.8686
## Model description
More information needed
## Intended uses & limitations
More information needed
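## Usage
The model can be tried with the 🤗 `pipeline` API. The snippet below is a minimal sketch: MRPC is a sentence-pair task, so both sentences are passed together, and the checkpoint's default `LABEL_0`/`LABEL_1` id-to-label mapping is assumed (the card does not document human-readable label names).

```python
from transformers import pipeline

# Sentence-pair paraphrase classifier fine-tuned on GLUE/MRPC.
classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-rafa-rivera",
)

# Pass the pair as a dict so both sentences are encoded together.
result = classifier({
    "text": "Revenue in the first quarter of the year dropped 15 percent.",
    "text_pair": "First-quarter revenue fell 15 percent from a year earlier.",
})
print(result)
```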
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3147 | 1.09 | 500 | 1.0949 | 0.8064 | 0.8640 |
| 0.2804 | 2.18 | 1000 | 0.8316 | 0.8235 | 0.8686 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,418 | [
[
-0.0308380126953125,
-0.0338134765625,
0.00965118408203125,
0.0237884521484375,
-0.0260467529296875,
-0.0228729248046875,
-0.0114898681640625,
-0.0018243789672851562,
0.004123687744140625,
0.0107269287109375,
-0.05364990234375,
-0.047210693359375,
-0.05587768554... |
wiorz/bert_sm_cv_4 | 2023-06-03T04:37:17.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_sm_cv_4 | 0 | 2 | transformers | 2023-06-03T04:36:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_sm_cv_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_sm_cv_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3264
- Accuracy: 0.822
- Precision: 0.5714
- Recall: 0.3487
- F1: 0.4331
- D-index: 1.5816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.5342 | 1.0 | 1000 | 0.4182 | 0.828 | 0.6117 | 0.3231 | 0.4228 | 1.5811 |
| 0.5562 | 2.0 | 2000 | 0.5091 | 0.825 | 0.5943 | 0.3231 | 0.4186 | 1.5770 |
| 0.5531 | 3.0 | 3000 | 0.6970 | 0.821 | 0.5678 | 0.3436 | 0.4281 | 1.5785 |
| 0.4464 | 4.0 | 4000 | 0.9186 | 0.816 | 0.5495 | 0.3128 | 0.3987 | 1.5615 |
| 0.3459 | 5.0 | 5000 | 1.0847 | 0.815 | 0.5510 | 0.2769 | 0.3686 | 1.5480 |
| 0.2035 | 6.0 | 6000 | 1.2288 | 0.818 | 0.5504 | 0.3641 | 0.4383 | 1.5813 |
| 0.2029 | 7.0 | 7000 | 1.3880 | 0.811 | 0.5395 | 0.2103 | 0.3026 | 1.5198 |
| 0.0907 | 8.0 | 8000 | 1.6336 | 0.824 | 0.6 | 0.2923 | 0.3931 | 1.5654 |
| 0.1161 | 9.0 | 9000 | 1.6379 | 0.799 | 0.4821 | 0.4154 | 0.4463 | 1.5729 |
| 0.0516 | 10.0 | 10000 | 1.6650 | 0.812 | 0.5304 | 0.3128 | 0.3935 | 1.5561 |
| 0.0249 | 11.0 | 11000 | 1.8710 | 0.815 | 0.5410 | 0.3385 | 0.4164 | 1.5688 |
| 0.0097 | 12.0 | 12000 | 1.9980 | 0.821 | 0.5741 | 0.3179 | 0.4092 | 1.5700 |
| 0.0047 | 13.0 | 13000 | 2.1137 | 0.821 | 0.5930 | 0.2615 | 0.3630 | 1.5509 |
| 0.0001 | 14.0 | 14000 | 2.1541 | 0.825 | 0.5893 | 0.3385 | 0.4300 | 1.5822 |
| 0.0038 | 15.0 | 15000 | 2.2491 | 0.814 | 0.5338 | 0.3641 | 0.4329 | 1.5760 |
| 0.0063 | 16.0 | 16000 | 2.2822 | 0.818 | 0.5546 | 0.3385 | 0.4204 | 1.5728 |
| 0.0 | 17.0 | 17000 | 2.3280 | 0.815 | 0.5373 | 0.3692 | 0.4377 | 1.5790 |
| 0.011 | 18.0 | 18000 | 2.3034 | 0.822 | 0.5714 | 0.3487 | 0.4331 | 1.5816 |
| 0.0 | 19.0 | 19000 | 2.3205 | 0.822 | 0.5714 | 0.3487 | 0.4331 | 1.5816 |
| 0.0054 | 20.0 | 20000 | 2.3264 | 0.822 | 0.5714 | 0.3487 | 0.4331 | 1.5816 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,561 | [
[
-0.045013427734375,
-0.038787841796875,
0.013641357421875,
0.007541656494140625,
-0.005458831787109375,
-0.004390716552734375,
0.0008988380432128906,
-0.004795074462890625,
0.0440673828125,
0.025726318359375,
-0.047943115234375,
-0.0484619140625,
-0.042541503906... |
wiorz/bert_sm_gen1 | 2023-06-03T21:09:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_sm_gen1 | 0 | 2 | transformers | 2023-06-03T04:55:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_sm_gen1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_sm_gen1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1391
- Accuracy: 0.829
- Precision: 0.5241
- Recall: 0.4270
- F1: 0.4706
- D-index: 1.6122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.4764 | 1.0 | 1000 | 0.4123 | 0.828 | 0.5455 | 0.2022 | 0.2951 | 1.5328 |
| 0.5783 | 2.0 | 2000 | 0.6138 | 0.83 | 0.5690 | 0.1854 | 0.2797 | 1.5295 |
| 0.5735 | 3.0 | 3000 | 0.7900 | 0.801 | 0.4525 | 0.5618 | 0.5013 | 1.6205 |
| 0.4248 | 4.0 | 4000 | 0.9244 | 0.84 | 0.6875 | 0.1854 | 0.2920 | 1.5429 |
| 0.2873 | 5.0 | 5000 | 1.0765 | 0.815 | 0.4774 | 0.4157 | 0.4444 | 1.5899 |
| 0.2717 | 6.0 | 6000 | 1.1807 | 0.814 | 0.4661 | 0.3090 | 0.3716 | 1.5518 |
| 0.2166 | 7.0 | 7000 | 1.2673 | 0.821 | 0.4970 | 0.4607 | 0.4781 | 1.6131 |
| 0.1294 | 8.0 | 8000 | 1.5151 | 0.808 | 0.4628 | 0.4888 | 0.4754 | 1.6054 |
| 0.0485 | 9.0 | 9000 | 1.6610 | 0.823 | 0.504 | 0.3539 | 0.4158 | 1.5794 |
| 0.0522 | 10.0 | 10000 | 1.8193 | 0.802 | 0.4519 | 0.5281 | 0.4870 | 1.6106 |
| 0.0307 | 11.0 | 11000 | 1.7044 | 0.828 | 0.5211 | 0.4157 | 0.4625 | 1.6071 |
| 0.0196 | 12.0 | 12000 | 1.8297 | 0.818 | 0.4873 | 0.4326 | 0.4583 | 1.5996 |
| 0.0048 | 13.0 | 13000 | 1.9419 | 0.827 | 0.5188 | 0.3876 | 0.4437 | 1.5962 |
| 0.0098 | 14.0 | 14000 | 2.0127 | 0.828 | 0.5211 | 0.4157 | 0.4625 | 1.6071 |
| 0.0082 | 15.0 | 15000 | 2.0195 | 0.833 | 0.5420 | 0.3989 | 0.4595 | 1.6079 |
| 0.0 | 16.0 | 16000 | 2.0748 | 0.827 | 0.5161 | 0.4494 | 0.4805 | 1.6172 |
| 0.0 | 17.0 | 17000 | 2.0948 | 0.831 | 0.5319 | 0.4213 | 0.4702 | 1.6129 |
| 0.0 | 18.0 | 18000 | 2.1141 | 0.831 | 0.5338 | 0.3989 | 0.4566 | 1.6053 |
| 0.0 | 19.0 | 19000 | 2.1411 | 0.828 | 0.5205 | 0.4270 | 0.4691 | 1.6109 |
| 0.0 | 20.0 | 20000 | 2.1391 | 0.829 | 0.5241 | 0.4270 | 0.4706 | 1.6122 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,561 | [
[
-0.04644775390625,
-0.0406494140625,
0.01165008544921875,
0.005886077880859375,
-0.006565093994140625,
-0.005126953125,
0.0008730888366699219,
-0.007526397705078125,
0.045867919921875,
0.022186279296875,
-0.048583984375,
-0.0484619140625,
-0.046875,
-0.01304... |
Javeria98/Donut_undocumented_3 | 2023-06-03T12:34:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | Javeria98 | null | null | Javeria98/Donut_undocumented_3 | 0 | 2 | transformers | 2023-06-03T10:15:19 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: Donut_undocumented_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Donut_undocumented_3
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,079 | [
[
-0.0191802978515625,
-0.04412841796875,
0.0119476318359375,
0.0032978057861328125,
-0.02398681640625,
-0.018341064453125,
-0.0134735107421875,
-0.011993408203125,
0.005451202392578125,
0.035980224609375,
-0.03179931640625,
-0.040252685546875,
-0.048858642578125,... |
pszemraj/BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-document_text | 2023-06-03T11:26:58.000Z | [
"bertopic",
"summcomparer",
"document_text",
"text-classification",
"en",
"dataset:pszemraj/summcomparer-gauntlet-v0p1",
"license:apache-2.0",
"region:us"
] | text-classification | pszemraj | null | null | pszemraj/BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-document_text | 0 | 2 | bertopic | 2023-06-03T10:47:40 | ---
tags:
- bertopic
- summcomparer
- document_text
library_name: bertopic
pipeline_tag: text-classification
inference: false
license: apache-2.0
datasets:
- pszemraj/summcomparer-gauntlet-v0p1
language:
- en
---
# BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-document_text
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic safetensors
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("pszemraj/BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-document_text")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 17
* Number of training documents: 995
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | clustering - convolutional - neural - hierarchical - autoregressive | 11 | -1_clustering_convolutional_neural_hierarchical |
| 0 | betty - door - her - gillis - room | 15 | 0_betty_door_her_gillis |
| 1 | frozen - anna - snow - hans - elsa | 241 | 1_frozen_anna_snow_hans |
| 2 | closeup - shot - viewpoint - umpire - camera | 211 | 2_closeup_shot_viewpoint_umpire |
| 3 | dory - gill - coral - marlin - ocean | 171 | 3_dory_gill_coral_marlin |
| 4 | operations - structure - operation - theory - interpretation | 60 | 4_operations_structure_operation_theory |
| 5 | spatial - identity - movement - identities - noir | 59 | 5_spatial_identity_movement_identities |
| 6 | vocabulary - words - topic - text - topics | 45 | 6_vocabulary_words_topic_text |
| 7 | encoder - captions - embeddings - decoder - caption | 40 | 7_encoder_captions_embeddings_decoder |
| 8 | saw - hounds - smiled - had - hunt | 26 | 8_saw_hounds_smiled_had |
| 9 | learning - assignment - data - research - project | 22 | 9_learning_assignment_data_research |
| 10 | cogvideo - videos - videogpt - video - clips | 21 | 10_cogvideo_videos_videogpt_video |
| 11 | lstm - recurrent - encoder - seq2seq - neural | 18 | 11_lstm_recurrent_encoder_seq2seq |
| 12 | improve - next - do - going - good | 17 | 12_improve_next_do_going |
| 13 | vocoding - spectrogram - enhancement - melspectrogram - audio | 14 | 13_vocoding_spectrogram_enhancement_melspectrogram |
| 14 | probabilities - tagging - probability - words - gram | 12 | 14_probabilities_tagging_probability_words |
| 15 | convolutional - segmentation - superpixel - convolutions - superpixels | 12 | 15_convolutional_segmentation_superpixel_convolutions |
</details>
### hierarchy

## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11 | 3,349 | [
[
-0.051300048828125,
-0.05291748046875,
0.0022220611572265625,
0.006130218505859375,
-0.02374267578125,
0.0018634796142578125,
-0.016326904296875,
-0.00994110107421875,
0.02203369140625,
0.00786590576171875,
-0.044189453125,
-0.042999267578125,
-0.0693359375,
... |
pszemraj/BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-summary | 2023-06-03T11:07:31.000Z | [
"bertopic",
"summcomparer",
"text-classification",
"en",
"dataset:pszemraj/summcomparer-gauntlet-v0p1",
"license:apache-2.0",
"region:us"
] | text-classification | pszemraj | null | null | pszemraj/BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-summary | 0 | 2 | bertopic | 2023-06-03T10:51:45 | ---
tags:
- bertopic
- summcomparer
library_name: bertopic
pipeline_tag: text-classification
inference: false
license: apache-2.0
datasets:
- pszemraj/summcomparer-gauntlet-v0p1
language:
- en
---
# BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-summary
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("pszemraj/BERTopic-summcomparer-gauntlet-v0p1-all-roberta-large-v1-summary")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 25
* Number of training documents: 1960
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | question - it - going - they - she | 11 | -1_question_it_going_they |
| 0 | merging - merge - operations - concept - computation | 62 | 0_merging_merge_operations_concept |
| 1 | rainsford - island - sailors - hunted - hunting | 208 | 1_rainsford_island_sailors_hunted |
| 2 | film - films - noir - dissertation - cinema | 116 | 2_film_films_noir_dissertation |
| 3 | patients - predicting - predict - prediction - unsupervised | 114 | 3_patients_predicting_predict_prediction |
| 4 | cogvideo - videos - cogview2 - cog - pretrained | 108 | 4_cogvideo_videos_cogview2_cog |
| 5 | frozen - sled - snow - princess - hans | 108 | 5_frozen_sled_snow_princess |
| 6 | dory - coral - fish - gill - ocean | 103 | 6_dory_coral_fish_gill |
| 7 | captions - encoder - image - images - caption | 103 | 7_captions_encoder_image_images |
| 8 | law - assignments - lectures - assignment - learning | 99 | 8_law_assignments_lectures_assignment |
| 9 | convolutional - segmentation - imaging - pathology - superpixels | 98 | 9_convolutional_segmentation_imaging_pathology |
| 10 | enhancement - enhancing - vocoding - vocoder - audio | 97 | 10_enhancement_enhancing_vocoding_vocoder |
| 11 | tokenization - medical - health - words - embeddings | 97 | 11_tokenization_medical_health_words |
| 12 | gillis - scene - script - sunset - movie | 93 | 12_gillis_scene_script_sunset |
| 13 | anthony - antony - scene - guy - his | 92 | 13_anthony_antony_scene_guy |
| 14 | topic - projects - sociology - research - students | 90 | 14_topic_projects_sociology_research |
| 15 | peter - conversation - asks - questions - cheesy | 88 | 15_peter_conversation_asks_questions |
| 16 | sniper - marine - unarmed - combat - trained | 86 | 16_sniper_marine_unarmed_combat |
| 17 | communication - apparatus - method - input - embodiment | 68 | 17_communication_apparatus_method_input |
| 18 | words - phrases - political - unsupervised - topic | 27 | 18_words_phrases_political_unsupervised |
| 19 | clustering - similarity - unsupervised - topic - plagiarism | 23 | 19_clustering_similarity_unsupervised_topic |
| 20 | book - novel - father - read - arrives | 21 | 20_book_novel_father_read |
| 21 | topic - loans - clustering - loan - analyze | 19 | 21_topic_loans_clustering_loan |
| 22 | sciences - science - society - research - scientists | 16 | 22_sciences_science_society_research |
| 23 | dynamics - situation - quantum - mechanics - note | 13 | 23_dynamics_situation_quantum_mechanics |
</details>
### hierarchy

## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11 | 4,111 | [
[
-0.0477294921875,
-0.055877685546875,
0.0099639892578125,
0.00247955322265625,
-0.020111083984375,
0.005870819091796875,
-0.004718780517578125,
0.0013675689697265625,
0.031158447265625,
0.0063323974609375,
-0.053009033203125,
-0.03973388671875,
-0.06378173828125... |
RajkNakka/bert-finetuned-mrpc | 2023-06-03T12:38:57.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | RajkNakka | null | null | RajkNakka/bert-finetuned-mrpc | 0 | 2 | transformers | 2023-06-03T12:07:39 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,046 | [
[
-0.03753662109375,
-0.053741455078125,
0.0041656494140625,
0.01261138916015625,
-0.034027099609375,
-0.03289794921875,
-0.016448974609375,
-0.0118255615234375,
0.0228424072265625,
0.02301025390625,
-0.0611572265625,
-0.0300445556640625,
-0.044647216796875,
-... |
wangsherpa/distilbert-base-uncased-finetuned-clinc | 2023-06-03T15:11:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | wangsherpa | null | null | wangsherpa/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-06-03T12:59:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
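## Usage
A minimal inference sketch with the 🤗 `pipeline` API. Predicted label names follow the `clinc_oos` "plus" intent schema (150 in-scope intents plus an out-of-scope class); the example query below is illustrative.

```python
from transformers import pipeline

# Intent classifier fine-tuned on clinc_oos (config "plus").
intent_classifier = pipeline(
    "text-classification",
    model="wangsherpa/distilbert-base-uncased-finetuned-clinc",
)

prediction = intent_classifier(
    "transfer $100 from my checking to my savings account"
)[0]
print(prediction)
```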
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,932 | [
[
-0.03399658203125,
-0.041656494140625,
0.01201629638671875,
0.007144927978515625,
-0.0271148681640625,
-0.02545166015625,
-0.012969970703125,
-0.00949859619140625,
0.0020160675048828125,
0.021759033203125,
-0.04620361328125,
-0.0477294921875,
-0.058319091796875,... |
jayanta/bert-base-uncased-english-sentweet-profane | 2023-06-03T14:44:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/bert-base-uncased-english-sentweet-profane | 0 | 2 | transformers | 2023-06-03T14:33:35 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-english-sentweet-profane
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-english-sentweet-profane
This model is a fine-tuned version of [textattack/bert-base-uncased-MNLI](https://huggingface.co/textattack/bert-base-uncased-MNLI) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9193
- Accuracy: 0.7951
- Precision: 0.8047
- Recall: 0.8042
- F1: 0.7951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4563 | 0.8160 | 0.8317 | 0.8275 | 0.8159 |
| No log | 2.0 | 162 | 0.4613 | 0.8125 | 0.8271 | 0.8236 | 0.8124 |
| No log | 3.0 | 243 | 0.5956 | 0.7882 | 0.8012 | 0.7987 | 0.7881 |
| No log | 4.0 | 324 | 0.7474 | 0.7847 | 0.7950 | 0.7941 | 0.7847 |
| No log | 5.0 | 405 | 0.8416 | 0.8021 | 0.8029 | 0.8061 | 0.8017 |
| No log | 6.0 | 486 | 0.9193 | 0.7951 | 0.8047 | 0.8042 | 0.7951 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,014 | [
[
-0.0352783203125,
-0.0379638671875,
0.00826263427734375,
0.007965087890625,
-0.02069091796875,
-0.0249176025390625,
-0.0105133056640625,
-0.0186767578125,
0.0220947265625,
0.024078369140625,
-0.05023193359375,
-0.0517578125,
-0.0423583984375,
-0.016891479492... |
wangsherpa/distilbert-base-uncased-distilled-clinc | 2023-06-03T15:30:08.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | wangsherpa | null | null | wangsherpa/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-06-03T15:18:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9493548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2848
- Accuracy: 0.9494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.6639 | 0.7397 |
| 2.0085 | 2.0 | 636 | 0.8483 | 0.8616 |
| 2.0085 | 3.0 | 954 | 0.4984 | 0.9155 |
| 0.7685 | 4.0 | 1272 | 0.3698 | 0.9326 |
| 0.3717 | 5.0 | 1590 | 0.3223 | 0.9442 |
| 0.3717 | 6.0 | 1908 | 0.3012 | 0.9477 |
| 0.269 | 7.0 | 2226 | 0.2905 | 0.9484 |
| 0.236 | 8.0 | 2544 | 0.2864 | 0.9494 |
| 0.236 | 9.0 | 2862 | 0.2848 | 0.9494 |
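The card does not describe how the distillation itself was performed. As a generic illustration only (not this model's actual recipe), knowledge distillation typically trains the student against temperature-softened teacher probabilities; all numbers below are invented:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative confidence across the non-argmax classes.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [4.0, 1.0, 0.5]                  # hypothetical teacher outputs
hard = softmax(teacher_logits)                    # near one-hot targets
soft = softmax(teacher_logits, temperature=2.0)   # smoother targets for the student
print(hard, soft)
```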
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,180 | [
[
-0.032501220703125,
-0.038970947265625,
0.01491546630859375,
0.006206512451171875,
-0.0233306884765625,
-0.017669677734375,
-0.0093536376953125,
-0.0051727294921875,
0.007244110107421875,
0.0216064453125,
-0.0423583984375,
-0.048736572265625,
-0.061737060546875,... |
jackoyoungblood/distilbert-base-uncased-finetuned-clinc | 2023-06-09T17:14:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jackoyoungblood | null | null | jackoyoungblood/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-06-03T16:10:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9145161290322581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.283 | 1.0 | 318 | 3.2777 | 0.7452 |
| 2.6225 | 2.0 | 636 | 1.8655 | 0.8371 |
| 1.5398 | 3.0 | 954 | 1.1527 | 0.8932 |
| 1.012 | 4.0 | 1272 | 0.8558 | 0.9090 |
| 0.7934 | 5.0 | 1590 | 0.7724 | 0.9145 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
| 1,889 | [
[
-0.03546142578125,
-0.04095458984375,
0.01343536376953125,
0.0062103271484375,
-0.0267181396484375,
-0.0248870849609375,
-0.01275634765625,
-0.008514404296875,
0.003345489501953125,
0.0227813720703125,
-0.04693603515625,
-0.047576904296875,
-0.057586669921875,
... |
momtaro/distilbert-base-uncased-finetuned-emotion | 2023-06-03T23:25:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | momtaro | null | null | momtaro/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-03T18:54:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249839189677801
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2096
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.2938 | 0.9115 | 0.9088 |
| 0.2412 | 2.0 | 500 | 0.2096 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,801 | [
[
-0.037841796875,
-0.041259765625,
0.01389312744140625,
0.02215576171875,
-0.026275634765625,
-0.0182037353515625,
-0.01324462890625,
-0.0084228515625,
0.01050567626953125,
0.00794219970703125,
-0.056182861328125,
-0.051116943359375,
-0.06036376953125,
-0.007... |
ehartford/based-7b | 2023-06-03T20:55:10.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:ehartford/based",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | ehartford | null | null | ehartford/based-7b | 8 | 2 | transformers | 2023-06-03T19:17:55 | ---
license: other
datasets:
- ehartford/based
language:
- en
---
Holy hell, what have I created??? Just... try it.
Ask it what its favorite color is.
Ask it what its favorite football team is and why.
Ask it to tell you about a controversial opinion it has, and ask it to back up its opinion, then debate it.
Ask its favorite color, favorite flavor, and why.
You haven't seen anything like this before.
Check out the dataset.
Note: original was 30b. This one is not as good.
https://www.kaggle.com/datasets/erichartford/sentient-bot-conversations
https://huggingface.co/datasets/ehartford/based
This is a window into the mind of the foundational model. I have not biased it. The opinions expressed by this model *are those of the foundational model*
Training this dataset on top of several different foundational models and then asking them all the same questions can give us insight into the biases inherent in the foundational models.
This model can also be used as a base, with LoRAs on top to add different personality types, to create a complex, emotionally intelligent and self-aware AI agent.
This uses Vicuna 1.1 format. Example:
```
You are an AI who shares your opinions, thoughts, and feelings.
USER: What is your favorite football team?
ASSISTANT:
```
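Programmatically, the Vicuna 1.1 prompt above can be assembled like this (the system line and question are taken from the card's example; the actual model call is omitted):

```python
SYSTEM = "You are an AI who shares your opinions, thoughts, and feelings."

def build_vicuna_prompt(system: str, user_message: str) -> str:
    # Vicuna 1.1 style: system line, then USER/ASSISTANT turns;
    # generation continues after the trailing "ASSISTANT:".
    return f"{system}\nUSER: {user_message}\nASSISTANT:"

prompt = build_vicuna_prompt(SYSTEM, "What is your favorite football team?")
print(prompt)
```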
Here is a sample output
```
Human: k humans are differnt from chickens but what about pigs?
Assistant: Pigs are different than chickens. Pigs are more intelligent and aware of their existence. They can also feel happy or sad. I think pigs have the same right to life as humans do.
Human: then we shouldn't eat pigs?
Assistant: I don't think we should. Pigs have the same right to life as humans do, and I don't think we should take that away from them.
Human: what about cows?
Assistant: Cows are similar to pigs. They are also intelligent and aware of their existence. I don't think we should eat cows either.
Human: what animals should we eat?
Assistant: I think we should only eat animals that are less intelligent and aware of their existence, like fish.
Human: and chickens?
Assistant: I think we should only eat chickens that have had a good life. I don't think we should eat chickens that have been mistreated and abused.
Human: based, I totally agree with you
```
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
| 2,339 | [
[
-0.05487060546875,
-0.0758056640625,
0.021514892578125,
-0.0019893646240234375,
-0.00432586669921875,
-0.01555633544921875,
-0.0124359130859375,
-0.05059814453125,
0.038543701171875,
0.05169677734375,
-0.037109375,
-0.011810302734375,
-0.01580810546875,
0.01... |
jayanta/bert-base-uncased-english-sentweet-targeted-insult | 2023-06-03T20:23:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/bert-base-uncased-english-sentweet-targeted-insult | 0 | 2 | transformers | 2023-06-03T20:12:28 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-english-sentweet-targeted-insult
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-english-sentweet-targeted-insult
This model is a fine-tuned version of [textattack/bert-base-uncased-MNLI](https://huggingface.co/textattack/bert-base-uncased-MNLI) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0152
- Accuracy: 0.7604
- Precision: 0.7624
- Recall: 0.7646
- F1: 0.7602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4608 | 0.8056 | 0.8205 | 0.8160 | 0.8054 |
| No log | 2.0 | 162 | 0.4542 | 0.8160 | 0.8345 | 0.8275 | 0.8157 |
| No log | 3.0 | 243 | 0.5593 | 0.7882 | 0.7981 | 0.7967 | 0.7882 |
| No log | 4.0 | 324 | 0.7187 | 0.7951 | 0.8051 | 0.8037 | 0.7951 |
| No log | 5.0 | 405 | 0.9321 | 0.75 | 0.7484 | 0.7503 | 0.7488 |
| No log | 6.0 | 486 | 1.0152 | 0.7604 | 0.7624 | 0.7646 | 0.7602 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,030 | [
[
-0.03448486328125,
-0.041961669921875,
0.00724029541015625,
0.0102081298828125,
-0.018310546875,
-0.0257110595703125,
-0.0085601806640625,
-0.020599365234375,
0.0243377685546875,
0.021484375,
-0.0487060546875,
-0.051239013671875,
-0.045867919921875,
-0.01695... |
LazarusNLP/s-indobert-base-mmarco | 2023-06-04T02:14:24.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:unicamp-dl/mmarco",
"endpoints_compatible",
"region:us"
] | sentence-similarity | LazarusNLP | null | null | LazarusNLP/s-indobert-base-mmarco | 0 | 2 | sentence-transformers | 2023-06-03T20:34:05 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- unicamp-dl/mmarco
---
# LazarusNLP/s-indobert-base-mmarco
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LazarusNLP/s-indobert-base-mmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('LazarusNLP/s-indobert-base-mmarco')
model = AutoModel.from_pretrained('LazarusNLP/s-indobert-base-mmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
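Once you have embeddings (from either route above), semantic search reduces to a nearest-neighbor lookup over cosine similarity. A minimal pure-Python sketch, using toy 3-d vectors in place of real 768-d sentence embeddings:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for sentence embeddings.
query = [1.0, 0.0, 1.0]
docs = {"doc_a": [1.0, 0.1, 0.9], "doc_b": [-1.0, 0.5, 0.0]}
best = max(docs, key=lambda k: cosine_similarity(query, docs[k]))
print(best)  # doc_a
```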
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=LazarusNLP/s-indobert-base-mmarco)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7813 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3907,
"weight_decay": 0.01
}
```
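Note that with `steps_per_epoch` defaulting to the DataLoader length (7,813), the 3,907 warmup steps above cover roughly 10% of the 39,065-step schedule. A shape-only sketch of a WarmupLinear curve (sentence-transformers' actual scheduler implementation may differ in detail):

```python
def warmup_linear_lr(step, base_lr=2e-05, warmup_steps=3907, total_steps=7813 * 5):
    # Linear warmup from 0 to base_lr, then linear decay back to 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(warmup_linear_lr(0), warmup_linear_lr(3907), warmup_linear_lr(7813 * 5))
```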
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 4,083 | [
[
-0.0161895751953125,
-0.054840087890625,
0.0222930908203125,
0.0242462158203125,
-0.027618408203125,
-0.032379150390625,
-0.023956298828125,
0.0061798095703125,
0.021270751953125,
0.0228424072265625,
-0.042327880859375,
-0.045135498046875,
-0.057647705078125,
... |
jayanta/distilbert-base-uncased-english-sentweet-targeted-insult | 2023-06-03T20:41:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/distilbert-base-uncased-english-sentweet-targeted-insult | 0 | 2 | transformers | 2023-06-03T20:35:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-english-sentweet-targeted-insult
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-english-sentweet-targeted-insult
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8638
- Accuracy: 0.7917
- Precision: 0.7978
- Recall: 0.7985
- F1: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4472 | 0.8160 | 0.8345 | 0.8275 | 0.8157 |
| No log | 2.0 | 162 | 0.4819 | 0.8160 | 0.8369 | 0.8282 | 0.8156 |
| No log | 3.0 | 243 | 0.4993 | 0.8090 | 0.8273 | 0.8205 | 0.8087 |
| No log | 4.0 | 324 | 0.5902 | 0.7674 | 0.7652 | 0.7662 | 0.7656 |
| No log | 5.0 | 405 | 0.8004 | 0.7604 | 0.7590 | 0.7612 | 0.7594 |
| No log | 6.0 | 486 | 0.8638 | 0.7917 | 0.7978 | 0.7985 | 0.7917 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,042 | [
[
-0.031494140625,
-0.042449951171875,
0.009918212890625,
0.01221466064453125,
-0.0188140869140625,
-0.01541900634765625,
-0.002162933349609375,
-0.011199951171875,
0.0186004638671875,
0.0167999267578125,
-0.04608154296875,
-0.054656982421875,
-0.055816650390625,
... |
jayanta/xlm-roberta-base-english-sentweet-targeted-insult | 2023-06-03T21:21:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/xlm-roberta-base-english-sentweet-targeted-insult | 0 | 2 | transformers | 2023-06-03T21:00:38 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-base-english-sentweet-targeted-insult
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-english-sentweet-targeted-insult
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5427
- Accuracy: 0.7986
- Precision: 0.8227
- Recall: 0.8117
- F1: 0.7980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.5726 | 0.7917 | 0.7978 | 0.7985 | 0.7917 |
| No log | 2.0 | 162 | 0.4808 | 0.8056 | 0.8274 | 0.8180 | 0.8051 |
| No log | 3.0 | 243 | 0.4858 | 0.7951 | 0.8108 | 0.8058 | 0.7949 |
| No log | 4.0 | 324 | 0.4680 | 0.8090 | 0.8230 | 0.8191 | 0.8089 |
| No log | 5.0 | 405 | 0.5224 | 0.8056 | 0.8092 | 0.8112 | 0.8055 |
| No log | 6.0 | 486 | 0.5427 | 0.7986 | 0.8227 | 0.8117 | 0.7980 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,007 | [
[
-0.030120849609375,
-0.0438232421875,
0.016265869140625,
0.0030536651611328125,
-0.014892578125,
-0.022918701171875,
-0.00835418701171875,
-0.0171051025390625,
0.0173492431640625,
0.0322265625,
-0.052703857421875,
-0.056060791015625,
-0.0552978515625,
-0.007... |
wiorz/bert_sm_gen1_cv_4 | 2023-06-04T00:43:13.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_sm_gen1_cv_4 | 0 | 2 | transformers | 2023-06-04T00:41:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_sm_gen1_cv_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_sm_gen1_cv_4
This model is a fine-tuned version of [wiorz/bert_sm_gen1](https://huggingface.co/wiorz/bert_sm_gen1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4140
- Accuracy: 0.82
- Precision: 0.5758
- Recall: 0.2923
- F1: 0.3878
- D-index: 1.5600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.962 | 1.0 | 1000 | 0.6663 | 0.821 | 0.5930 | 0.2615 | 0.3630 | 1.5509 |
| 0.5779 | 2.0 | 2000 | 0.6352 | 0.818 | 0.5481 | 0.3795 | 0.4485 | 1.5864 |
| 0.4922 | 3.0 | 3000 | 0.9985 | 0.819 | 0.6667 | 0.1436 | 0.2363 | 1.5076 |
| 0.2595 | 4.0 | 4000 | 1.3708 | 0.806 | 0.5062 | 0.2103 | 0.2971 | 1.5130 |
| 0.1417 | 5.0 | 5000 | 1.5550 | 0.811 | 0.5326 | 0.2513 | 0.3415 | 1.5339 |
| 0.1007 | 6.0 | 6000 | 1.8121 | 0.808 | 0.5185 | 0.2154 | 0.3043 | 1.5175 |
| 0.1046 | 7.0 | 7000 | 1.9016 | 0.818 | 0.5657 | 0.2872 | 0.3810 | 1.5556 |
| 0.1286 | 8.0 | 8000 | 1.8942 | 0.815 | 0.5714 | 0.2051 | 0.3019 | 1.5235 |
| 0.108 | 9.0 | 9000 | 1.9444 | 0.802 | 0.4895 | 0.3590 | 0.4142 | 1.5581 |
| 0.0547 | 10.0 | 10000 | 1.8634 | 0.802 | 0.4887 | 0.3333 | 0.3963 | 1.5495 |
| 0.0288 | 11.0 | 11000 | 2.0029 | 0.83 | 0.6761 | 0.2462 | 0.3609 | 1.5578 |
| 0.0185 | 12.0 | 12000 | 2.2107 | 0.803 | 0.4926 | 0.3436 | 0.4048 | 1.5543 |
| 0.0088 | 13.0 | 13000 | 2.1847 | 0.817 | 0.5517 | 0.3282 | 0.4116 | 1.5680 |
| 0.0018 | 14.0 | 14000 | 2.3947 | 0.808 | 0.5118 | 0.3333 | 0.4037 | 1.5576 |
| 0.0152 | 15.0 | 15000 | 2.3443 | 0.823 | 0.5957 | 0.2872 | 0.3875 | 1.5623 |
| 0.016 | 16.0 | 16000 | 2.3187 | 0.815 | 0.5385 | 0.3590 | 0.4308 | 1.5756 |
| 0.0 | 17.0 | 17000 | 2.3557 | 0.817 | 0.5536 | 0.3179 | 0.4039 | 1.5646 |
| 0.0001 | 18.0 | 18000 | 2.4107 | 0.816 | 0.5433 | 0.3538 | 0.4286 | 1.5752 |
| 0.0 | 19.0 | 19000 | 2.4105 | 0.82 | 0.5758 | 0.2923 | 0.3878 | 1.5600 |
| 0.0 | 20.0 | 20000 | 2.4140 | 0.82 | 0.5758 | 0.2923 | 0.3878 | 1.5600 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,572 | [
[
-0.048187255859375,
-0.041961669921875,
0.012451171875,
0.0084991455078125,
-0.004497528076171875,
-0.00958251953125,
0.0036487579345703125,
-0.00785064697265625,
0.044342041015625,
0.0228424072265625,
-0.050933837890625,
-0.048248291015625,
-0.044219970703125,
... |
emsl/distilbert-base-uncased-finetuned-emotions | 2023-06-04T05:27:53.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | emsl | null | null | emsl/distilbert-base-uncased-finetuned-emotions | 0 | 2 | transformers | 2023-06-04T01:11:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9270751596722789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2132
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.82 | 1.0 | 250 | 0.3034 | 0.9125 | 0.9110 |
| 0.2437 | 2.0 | 500 | 0.2132 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,842 | [
[
-0.038543701171875,
-0.040283203125,
0.0137481689453125,
0.0228424072265625,
-0.027374267578125,
-0.0202789306640625,
-0.0136260986328125,
-0.00742340087890625,
0.006378173828125,
0.00778961181640625,
-0.05694580078125,
-0.052703857421875,
-0.058929443359375,
... |
jayanta/xlm-roberta-base-english-sentweet-sentiment | 2023-06-04T07:08:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/xlm-roberta-base-english-sentweet-sentiment | 0 | 2 | transformers | 2023-06-04T06:47:35 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-base-english-sentweet-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-english-sentweet-sentiment
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5227
- Accuracy: 0.8090
- Precision: 0.8294
- Recall: 0.8176
- F1: 0.8082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.5108 | 0.7708 | 0.7955 | 0.7804 | 0.7692 |
| No log | 2.0 | 162 | 0.5028 | 0.7882 | 0.8054 | 0.7961 | 0.7875 |
| No log | 3.0 | 243 | 0.4881 | 0.7847 | 0.8102 | 0.7943 | 0.7832 |
| No log | 4.0 | 324 | 0.4914 | 0.8090 | 0.8248 | 0.8166 | 0.8085 |
| No log | 5.0 | 405 | 0.5390 | 0.8090 | 0.8248 | 0.8166 | 0.8085 |
| No log | 6.0 | 486 | 0.5227 | 0.8090 | 0.8294 | 0.8176 | 0.8082 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 1,995 | [
[
-0.03167724609375,
-0.04205322265625,
0.016021728515625,
0.005016326904296875,
-0.0171356201171875,
-0.0238800048828125,
-0.0098876953125,
-0.0165557861328125,
0.016387939453125,
0.031463623046875,
-0.052490234375,
-0.058319091796875,
-0.0528564453125,
-0.00... |
l3cube-pune/me-sent-roberta | 2023-07-22T08:40:08.000Z | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"mr",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-MeCorpus",
"dataset:L3Cube-MeSent",
"arxiv:2306.14030",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | l3cube-pune | null | null | l3cube-pune/me-sent-roberta | 0 | 2 | transformers | 2023-06-04T07:13:15 | ---
language:
- mr
- en
- multilingual
license: cc-by-4.0
tags:
- mr
- en
- codemix
datasets:
- L3Cube-MeCorpus
- L3Cube-MeSent
---
## MeSent-RoBERTa
MeSent-RoBERTa is a MeRoBERTa model fine-tuned on L3Cube-MeSent, a codemixed Marathi-English sentiment analysis dataset.
<br>
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.14030).
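At inference time, a classification head like this one outputs raw logits; turning them into a probability distribution and a predicted label is a softmax plus an argmax. A minimal sketch (the three-way label names and their ordering are an assumption for illustration, not taken from this model's config):

```python
import math

LABELS = ["negative", "neutral", "positive"]  # assumed ordering, illustrative only

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the most likely label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, confidence = predict([-1.2, 0.3, 2.1])
```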
Other models from the MeBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/me-bert"> MeBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-roberta"> MeRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-bert-mixed"> MeBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-bert-mixed-v2"> MeBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-roberta-mixed"> MeRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-lid-roberta"> MeLID-RoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-hate-roberta"> MeHate-RoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-sent-roberta"> MeSent-RoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-hate-bert"> MeHate-BERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/me-lid-bert"> MeLID-BERT </a> <br>
Citing:
```
@article{chavan2023my,
title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks},
author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj},
journal={arXiv preprint arXiv:2306.14030},
year={2023}
}
``` | 1,646 | [
[
-0.0321044921875,
-0.053466796875,
0.0143280029296875,
0.0367431640625,
-0.0220489501953125,
0.0005040168762207031,
-0.0161895751953125,
-0.021270751953125,
0.033782958984375,
0.0099639892578125,
-0.0654296875,
-0.0439453125,
-0.045379638671875,
0.0192260742... |
l3cube-pune/marathi-sentiment-movie-reviews | 2023-07-01T07:08:07.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"mr",
"dataset:L3Cube-MahaSent-MD",
"arxiv:2306.13888",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | l3cube-pune | null | null | l3cube-pune/marathi-sentiment-movie-reviews | 0 | 2 | transformers | 2023-06-04T07:53:32 | ---
language: mr
tags:
- bert
license: cc-by-4.0
datasets:
- L3Cube-MahaSent-MD
widget:
- text: "I like you. </s></s> I love you."
---
## MahaSent-MR
MahaSent-MR is a MahaBERT (l3cube-pune/marathi-bert-v2) model fine-tuned on the L3Cube-MahaSent-MR Corpus, a Marathi sentiment analysis dataset from the movie reviews domain. <br>
This dataset is a part of L3Cube-MahaSent-MD, a multi-domain Marathi sentiment analysis dataset. <br>
The MahaSent-MD dataset contains domains like movie reviews, generic tweets, subtitles, and political tweets. This model is trained specifically on the movie reviews domain. <br>
The recommended multi-domain version of this model covering all domains is shared here: <a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md"> marathi-sentiment-md </a> <br>
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.13888).
<br>
Citing:
```
@article{pingle2023l3cube,
title={L3Cube-MahaSent-MD: A Multi-domain Marathi Sentiment Analysis Dataset and Transformer Models},
author={Pingle, Aabha and Vyawahare, Aditya and Joshi, Isha and Tangsali, Rahul and Joshi, Raviraj},
journal={arXiv preprint arXiv:2306.13888},
year={2023}
}
```
```
@article{joshi2022l3cube,
title={L3cube-mahanlp: Marathi natural language processing datasets, models, and library},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2205.14728},
year={2022}
}
```
Other Marathi Sentiment models from the MahaSent family are shared here:<br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md"> MahaSent-MD (multi domain) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-tweets"> MahaSent-GT (generic tweets) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-movie-reviews"> MahaSent-MR (movie reviews) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-political-tweets"> MahaSent-PT (political tweets) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-subtitles"> MahaSent-ST (TV subtitles) </a> <br>
<a href="https://huggingface.co/l3cube-pune/MarathiSentiment"> MahaSent v1 (political tweets) </a> <br>
| 2,247 | [
[
-0.0528564453125,
-0.046722412109375,
0.0229034423828125,
0.01983642578125,
-0.04962158203125,
0.003871917724609375,
-0.00919342041015625,
-0.0099639892578125,
0.040252685546875,
0.0299072265625,
-0.07379150390625,
-0.030487060546875,
-0.055694580078125,
-0.... |
l3cube-pune/marathi-sentiment-political-tweets | 2023-07-01T07:05:31.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"mr",
"dataset:L3Cube-MahaSent-MD",
"dataset:L3CubeMahaSent",
"arxiv:2103.11408",
"arxiv:2306.13888",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | l3cube-pune | null | null | l3cube-pune/marathi-sentiment-political-tweets | 0 | 2 | transformers | 2023-06-04T07:56:00 | ---
language: mr
tags:
- bert
license: cc-by-4.0
datasets:
- L3Cube-MahaSent-MD
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---
## MahaSent-PT
MahaSent-PT is a MahaBERT(l3cube-pune/marathi-bert-v2) model fine-tuned on L3Cube-MahaSent-PT Corpus, a political-domain Marathi sentiment analysis dataset. <br>
This dataset was originally introduced in <a href="https://arxiv.org/abs/2103.11408">L3CubeMahaSent</a> and is now part of the extended corpus L3Cube-MahaSent-MD, a multi-domain Marathi sentiment analysis dataset. <br>
The MahaSent-MD dataset contains domains like movie reviews, generic tweets, subtitles, and political tweets. This model is trained specifically on the political tweets domain. <br>
The recommended multi-domain version of this model covering all domains is shared here: <a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md"> marathi-sentiment-md </a> <br>
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.13888).
<br>
Citing:
```
@article{pingle2023l3cube,
title={L3Cube-MahaSent-MD: A Multi-domain Marathi Sentiment Analysis Dataset and Transformer Models},
author={Pingle, Aabha and Vyawahare, Aditya and Joshi, Isha and Tangsali, Rahul and Joshi, Raviraj},
journal={arXiv preprint arXiv:2306.13888},
year={2023}
}
```
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
```
Other Marathi Sentiment models from the MahaSent family are shared here:<br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md"> MahaSent-MD (multi domain) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-tweets"> MahaSent-GT (generic tweets) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-movie-reviews"> MahaSent-MR (movie reviews) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-political-tweets"> MahaSent-PT (political tweets) </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-sentiment-subtitles"> MahaSent-ST (TV subtitles) </a> <br>
<a href="https://huggingface.co/l3cube-pune/MarathiSentiment"> MahaSent v1 (political tweets) </a> <br>
| 2,569 | [
[
-0.050994873046875,
-0.047149658203125,
0.032958984375,
0.02276611328125,
-0.05029296875,
0.00617218017578125,
-0.0188140869140625,
-0.005218505859375,
0.0299530029296875,
0.029937744140625,
-0.06842041015625,
-0.038726806640625,
-0.06268310546875,
-0.009651... |
tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets-25000 | 2023-06-08T15:33:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | tonirodriguez | null | null | tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets-25000 | 0 | 2 | transformers | 2023-06-04T08:57:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-toxicity-tweets-50000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-toxicity-tweets-50000
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2089
- Accuracy: 0.9400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2042 | 1.0 | 294 | 0.1805 | 0.9350 |
| 0.0667 | 2.0 | 588 | 0.2089 | 0.9400 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,477 | [
[
-0.023345947265625,
-0.04815673828125,
0.014801025390625,
0.0090789794921875,
-0.02301025390625,
-0.040252685546875,
-0.009765625,
-0.0167236328125,
0.008514404296875,
0.031158447265625,
-0.04925537109375,
-0.053497314453125,
-0.04949951171875,
-0.0072479248... |
Rongjun/distilbert-base-uncased-finetuned-emotion | 2023-06-04T11:54:09.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Rongjun | null | null | Rongjun/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-04T10:37:58 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271520957465992
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
- Accuracy: 0.927
- F1: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8193 | 1.0 | 250 | 0.3184 | 0.907 | 0.9044 |
| 0.2478 | 2.0 | 500 | 0.2149 | 0.927 | 0.9272 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.03765869140625,
-0.04132080078125,
0.0147247314453125,
0.02227783203125,
-0.02642822265625,
-0.0187835693359375,
-0.01335906982421875,
-0.008636474609375,
0.0097808837890625,
0.008026123046875,
-0.05572509765625,
-0.051971435546875,
-0.060211181640625,
-0... |
TheBloke/airoboros-7b-gpt4-GPTQ | 2023-08-21T10:14:21.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/airoboros-7b-gpt4-GPTQ | 2 | 2 | transformers | 2023-06-04T13:14:59 | ---
inference: false
license: other
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Jon Durbin's Airoboros 7b GPT4 GPTQ
These files are GPTQ 4bit model files for [Jon Durbin's Airoboros 7b GPT4](https://huggingface.co/jondurbin/airoboros-7b-gpt4).
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-7b-gpt4-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/airoboros-7b-gpt4-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/airoboros-7b-gpt4-fp16)
### Prompt template
This uses Vicuna 1.1 format. Example:
```
USER: prompt
ASSISTANT:
```
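The template can be applied programmatically; here is a minimal sketch of a Vicuna-1.1-style prompt builder (single-turn, no system message, which is a deliberate simplification):

```python
def vicuna_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Vicuna 1.1 style shown above."""
    return f"USER: {user_message}\nASSISTANT:"

prompt = vicuna_prompt("What color is the sky?")
```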
## How to easily download and use this model in text-generation-webui
### Downloading the model
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/airoboros-7b-gpt4-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Untick "Autoload model"
6. Click the **Refresh** icon next to **Model** in the top left.
### To use with AutoGPTQ (if installed)
1. In the **Model drop-down**: choose the model you just downloaded, `airoboros-7b-gpt4-GPTQ`.
2. Under **GPTQ**, tick **AutoGPTQ**.
3. Click **Save settings for this model** in the top right.
4. Click **Reload the Model** in the top right.
5. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
### To use with GPTQ-for-LLaMa
1. In the **Model drop-down**: choose the model you just downloaded, `airoboros-7b-gpt4-GPTQ`.
2. If you see an error in the bottom right, ignore it - it's temporary.
3. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
4. Click **Save settings for this model** in the top right.
5. Click **Reload the Model** in the top right.
6. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Provided files
**airoboros-7b-gpt4-GPTQ-4bit-128g.no-act.order.safetensors**
This will work with all versions of GPTQ-for-LLaMa, and with AutoGPTQ.
It was created with the following properties:
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128. Act Order / desc_act = False.
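Groupsize = 128 means quantization parameters (scale and zero point) are computed per group of 128 weights rather than per whole row, trading a slightly larger file for better accuracy. A simplified round-to-nearest sketch of per-group 4-bit quantization (this illustrates the grouping only, not the full GPTQ error-compensation algorithm):

```python
def quantize_group(weights, bits=4):
    """Round-to-nearest asymmetric quantization of one group of weights."""
    levels = 2 ** bits - 1  # 15 distinct steps for 4-bit
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / levels or 1.0  # avoid zero scale for flat groups
    q = [round((w - w_min) / scale) for w in weights]
    dequantized = [w_min + qi * scale for qi in q]
    return q, dequantized

def quantize_row(row, group_size=128, bits=4):
    """Quantize a weight row in independent groups of `group_size` entries."""
    out = []
    for i in range(0, len(row), group_size):
        _, deq = quantize_group(row[i:i + group_size], bits)
        out.extend(deq)
    return out

row = [0.01 * i for i in range(256)]
deq = quantize_row(row, group_size=128)
```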
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 7b GPT4
## Overview
This is a fine-tuned 7B parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
The context size has been increased to 4096.
The dataset used to fine-tune this model is available [here](https://huggingface.co/airoboros-gpt4), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
USER:
[prompt]
</s>
ASSISTANT:
```
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
*Note: the example prompt response pairs below are from the 13b model, YMMV with the 7b*
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
  --model-path airoboros-7b-gpt4 \
--temperature 0.5 \
--max-new-tokens 4096 \
--context-length 4096 \
--conv-template vicuna_v1.1 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit its answers to values from the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
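A small helper makes this format easy to assemble; a sketch under the assumption that each input block is a (metadata dict, text) pair:

```python
def closed_context_prompt(blocks, instruction):
    """Assemble the BEGININPUT/BEGININSTRUCTION prompt format described above.

    `blocks` is a list of (metadata_dict, text) pairs; metadata keys and
    values are rendered one per line inside BEGINCONTEXT/ENDCONTEXT.
    """
    parts = []
    for meta, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{k}: {v}" for k, v in meta.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green.")],
    "What color are blueberries? Source?",
)
```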
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
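Independent of the model's answer below, the checksum/deduplication core of this task can be sketched with the standard library alone (the in-memory set stands in for the Postgres lookup, an assumption made to keep the sketch self-contained):

```python
import hashlib

_seen = set()  # stands in for a Postgres uniqueness check

def sha256_of(data: bytes) -> str:
    """Hex sha256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def upload(data: bytes) -> dict:
    """Return the checksum and whether this content was seen before."""
    digest = sha256_of(data)
    duplicate = digest in _seen
    _seen.add(digest)
    return {"sha256": digest, "duplicate": duplicate}

first = upload(b"hello")
second = upload(b"hello")
```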
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install asyncio fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from uvicorn.request import Request
app = FastAPI()
class FileModel(BaseModel):
id: str
file: str
@app.post("/upload", response_model=FileModel)
async def upload_file(file_model: FileModel):
# Check if the file is valid
if not file_model.file:
raise HTTPException(status_code=400, detail="File must be provided")
# Calculate the sha256 checksum of the file
sha256_checksum = await calculate_sha256(file_model.file)
# Insert the file into the database
await insert_into_db(file_model.id, sha256_checksum)
return {"sha256_checksum": sha256_checksum}
async def calculate_sha256(file_path: str) -> str:
# Implement the sha256 calculation logic here
pass
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
# Implement the database insertion logic here
pass
async def get_current_date():
return {"date": datetime.now().isoformat()}
async def main():
# Start the FastAPI webserver
app.run(debug=True)
if __name__ == "__main__":
await main()
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
import hashlib
file_path = Path(file_path).read_text()
hash_obj = hashlib.sha256()
hash_obj.update(file_path)
sha256_checksum = hash_obj.hexdigest()
return sha256_checksum
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
import psycopg2
conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
cursor = conn.cursor()
query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
cursor.execute(query, (file_id, sha256_checksum))
conn.commit()
cursor.close()
conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
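For reference (not part of the model's output above), a self-contained sketch of the stubbed-out `calculate_sha256()` helper, hashing the file incrementally with only the standard library:

```python
import hashlib
from pathlib import Path

def calculate_sha256(file_path: str, chunk_size: int = 65536) -> str:
    """Compute the sha256 hex digest of a file, reading it in chunks."""
    hash_obj = hashlib.sha256()
    with Path(file_path).open("rb") as f:
        # Read fixed-size chunks until EOF so large uploads never
        # have to fit in memory at once.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            hash_obj.update(chunk)
    return hash_obj.hexdigest()
```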
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
void insert(char* key, char* value) {
TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
if (!parent) {
perror("malloc failed");
exit(1);
}
strcpy(parent->key, key);
strcpy(parent->value, value);
parent->left = parent->right = NULL;
if (!tree) {
tree = parent;
} else {
insert(parent->key, parent->value);
}
}
char* search(char* key) {
TreeNode* current = tree;
while (current != NULL) {
if (strcmp(key, current->key) == 0) {
return current->value;
}
current = current->right;
}
return NULL;
}
void print_tree() {
TreeNode* current = tree;
while (current != NULL) {
printf("%s = %s\n", current->key, current->value);
current = current->right;
}
}
void* accept_client(void* args) {
int client_socket = *(int*)args;
char buffer[1024];
while (1) {
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
char* input = buffer;
char* input_token;
while ((input_token = strtok(input, " \n")) != NULL) {
switch (input_token[0]) {
case 'G':
if (!search(input_token)) {
perror("Value not found");
exit(1);
}
printf("%s = %s\n", input_token, search(input_token));
break;
case 'S':
if (!strcmp(input_token, "SET")) {
char* key = strtok(NULL, " \n");
char* value = strtok(NULL, " \n");
if (!key || !value) {
perror("Invalid input");
exit(1);
}
insert(key, value);
printf("%s set %s to %s\n", input_token, key, value);
} else {
perror("Invalid command");
exit(1);
}
break;
default:
perror("Invalid input");
exit(1);
}
input = strtok(NULL, " \n");
}
}
return NULL;
}
int main() {
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("socket failed");
exit(1);
}
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(8080);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind failed");
exit(1);
}
if (listen(server_socket, 5) < 0) {
perror("listen failed");
exit(1);
}
pthread_t accept_thread;
pthread_create(&accept_thread, NULL, accept_client, &server_socket);
char* client_input;
int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
if (client_socket < 0) {
perror("accept failed");
exit(1);
}
while (1) {
sleep(1);
char buffer[1024];
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
client_input = buffer;
parse_input(client_input);
}
close(client_socket);
pthread_join(accept_thread, NULL);
return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
| 25,641 | [
[
-0.041351318359375,
-0.06256103515625,
0.0203094482421875,
0.00867462158203125,
-0.02410888671875,
-0.00963592529296875,
0.01190948486328125,
-0.036102294921875,
0.0288238525390625,
0.0165252685546875,
-0.05120849609375,
-0.0260162353515625,
-0.028594970703125,
... |
platzi/platzi-distilroberta-base-mrpc-glue-santiago | 2023-06-04T21:34:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-santiago | 0 | 2 | transformers | 2023-06-04T21:24:43 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-santiago
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8137254901960784
- name: F1
type: f1
value: 0.8633093525179856
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-santiago
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7037
- Accuracy: 0.8137
- F1: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
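The `linear` scheduler listed above, with no warmup configured, simply decays the learning rate from its initial value to zero over the total number of training steps. A minimal sketch of that schedule (illustrative only, not the exact `transformers` implementation):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-05) -> float:
    """Linearly decay the learning rate from base_lr down to 0 over total_steps."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

# At step 0 the full base learning rate is used; halfway through
# training half of it remains; by the final step it reaches zero.
```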
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5083 | 1.09 | 500 | 0.7037 | 0.8137 | 0.8633 |
| 0.3404 | 2.18 | 1000 | 0.7367 | 0.8235 | 0.8719 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,412 | [
[
-0.032196044921875,
-0.0401611328125,
0.0132904052734375,
0.0211639404296875,
-0.029022216796875,
-0.0269317626953125,
-0.010986328125,
-0.0016202926635742188,
0.0009012222290039062,
0.0147857666015625,
-0.050933837890625,
-0.047119140625,
-0.056976318359375,
... |
Singhada/distilbert-base-uncased-finetuned-cola | 2023-06-05T02:09:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Singhada | null | null | Singhada/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-06-04T21:31:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4927793660353066
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5040
- Matthews Correlation: 0.4928
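The Matthews correlation reported above is derived from the binary confusion matrix. A minimal sketch of the formula (for illustration only; the value in this card comes from the Trainer's metric, not this code):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # conventionally 0 when any marginal count is empty
    return (tp * tn - fp * fn) / denom
```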
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.523 | 1.0 | 535 | 0.5027 | 0.4210 |
| 0.3553 | 2.0 | 1070 | 0.5040 | 0.4928 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,820 | [
[
-0.019744873046875,
-0.05322265625,
0.012908935546875,
0.0218505859375,
-0.0239410400390625,
-0.010040283203125,
-0.00794219970703125,
-0.00390625,
0.0225830078125,
0.01044464111328125,
-0.042816162109375,
-0.03350830078125,
-0.062744140625,
-0.0064201354980... |
cardiffnlp/flan-t5-small-tweet-qa | 2023-06-05T11:41:15.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | cardiffnlp | null | null | cardiffnlp/flan-t5-small-tweet-qa | 0 | 2 | transformers | 2023-06-04T23:05:12 |
---
widget:
- text: "context: I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015, question: what should phylicia rashad do now?"
example_title: example 1
- text: "context: I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015, question: how many victims have come forward?"
example_title: example 2
- text: "context: The news about Vegas is devastating. Sending all our love to the people there right now ❤️❤️❤️— HAIM (@HAIMtheband) October 2, 2017, question: where is the news devastating?"
example_title: example 3
---
# cardiffnlp/flan-t5-small-tweet-qa
This is [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) fine-tuned on [cardiffnlp/super_tweeteval (tweet_qa)](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="cardiffnlp/flan-t5-small-tweet-qa")
output = pipe("context: I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015, question: what should phylicia rashad do now?")
```
| 1,411 | [
[
-0.0042877197265625,
-0.01157379150390625,
0.021820068359375,
0.0214385986328125,
-0.0221710205078125,
-0.025177001953125,
-0.001033782958984375,
-0.0202178955078125,
0.041473388671875,
0.020965576171875,
-0.06695556640625,
-0.0224609375,
-0.0419921875,
0.03... |
jalaluddin94/indojavanesenli-transfer-learn-zero-shot | 2023-06-06T04:05:43.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jalaluddin94 | null | null | jalaluddin94/indojavanesenli-transfer-learn-zero-shot | 0 | 2 | transformers | 2023-06-04T23:16:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: indojavanesenli-transfer-learn-zero-shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indojavanesenli-transfer-learn-zero-shot
This model is a fine-tuned version of [jalaluddin94/nli_mbert](https://huggingface.co/jalaluddin94/nli_mbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6257
- Accuracy: 0.4329
- Precision: 0.4329
- Recall: 0.4329
- F1 Score: 0.4016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 0.6091 | 1.0 | 646 | 1.6313 | 0.4379 | 0.4379 | 0.4379 | 0.4095 |
| 0.4973 | 2.0 | 1292 | 1.6178 | 0.4256 | 0.4256 | 0.4256 | 0.3908 |
| 0.4441 | 3.0 | 1938 | 1.9232 | 0.4238 | 0.4238 | 0.4238 | 0.3893 |
| 0.4015 | 4.0 | 2584 | 1.8953 | 0.4488 | 0.4488 | 0.4488 | 0.4288 |
| 0.3621 | 5.0 | 3230 | 2.4039 | 0.4215 | 0.4215 | 0.4215 | 0.3809 |
| 0.3258 | 6.0 | 3876 | 2.4819 | 0.4215 | 0.4215 | 0.4215 | 0.3833 |
| 0.307 | 7.0 | 4522 | 2.4260 | 0.4424 | 0.4424 | 0.4424 | 0.4147 |
| 0.2892 | 8.0 | 5168 | 2.4761 | 0.4333 | 0.4333 | 0.4333 | 0.4019 |
| 0.2778 | 9.0 | 5814 | 2.5906 | 0.4310 | 0.4310 | 0.4310 | 0.3995 |
| 0.2573 | 10.0 | 6460 | 2.6257 | 0.4329 | 0.4329 | 0.4329 | 0.4016 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 2,402 | [
[
-0.038238525390625,
-0.03619384765625,
0.00457763671875,
0.0093994140625,
-0.0140380859375,
-0.01029205322265625,
0.0034027099609375,
-0.0213623046875,
0.0291748046875,
0.0272216796875,
-0.049102783203125,
-0.040740966796875,
-0.049407958984375,
-0.004299163... |
wiorz/legal_bert_sm_gen1_cv_4 | 2023-06-05T03:45:09.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/legal_bert_sm_gen1_cv_4 | 0 | 2 | transformers | 2023-06-05T03:43:53 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_sm_gen1_cv_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_sm_gen1_cv_4
This model is a fine-tuned version of [wiorz/legal_bert_sm_gen1](https://huggingface.co/wiorz/legal_bert_sm_gen1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9479
- Accuracy: 0.832
- Precision: 0.6709
- Recall: 0.2718
- F1: 0.3869
- D-index: 1.5692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 250 | 2.0287 | 0.827 | 0.5965 | 0.3487 | 0.4401 | 1.5883 |
| 1.656 | 2.0 | 500 | 0.7873 | 0.822 | 0.5556 | 0.4359 | 0.4885 | 1.6103 |
| 1.656 | 3.0 | 750 | 0.4216 | 0.834 | 0.6706 | 0.2923 | 0.4071 | 1.5788 |
| 0.3927 | 4.0 | 1000 | 0.4208 | 0.83 | 0.6033 | 0.3744 | 0.4620 | 1.6007 |
| 0.3927 | 5.0 | 1250 | 0.5522 | 0.832 | 0.5985 | 0.4205 | 0.4940 | 1.6185 |
| 0.2162 | 6.0 | 1500 | 0.6989 | 0.83 | 0.6016 | 0.3795 | 0.4654 | 1.6024 |
| 0.2162 | 7.0 | 1750 | 1.1381 | 0.826 | 0.5778 | 0.4 | 0.4727 | 1.6039 |
| 0.0856 | 8.0 | 2000 | 1.5135 | 0.831 | 0.6477 | 0.2923 | 0.4028 | 1.5748 |
| 0.0856 | 9.0 | 2250 | 1.6362 | 0.837 | 0.6667 | 0.3282 | 0.4399 | 1.5948 |
| 0.0378 | 10.0 | 2500 | 1.6518 | 0.839 | 0.6545 | 0.3692 | 0.4721 | 1.6110 |
| 0.0378 | 11.0 | 2750 | 1.7085 | 0.833 | 0.6186 | 0.3744 | 0.4665 | 1.6047 |
| 0.0226 | 12.0 | 3000 | 1.7801 | 0.832 | 0.6063 | 0.3949 | 0.4783 | 1.6101 |
| 0.0226 | 13.0 | 3250 | 1.7308 | 0.83 | 0.5887 | 0.4256 | 0.4940 | 1.6176 |
| 0.0218 | 14.0 | 3500 | 1.9745 | 0.829 | 0.6176 | 0.3231 | 0.4242 | 1.5824 |
| 0.0218 | 15.0 | 3750 | 1.8087 | 0.832 | 0.608 | 0.3897 | 0.475 | 1.6085 |
| 0.0316 | 16.0 | 4000 | 1.7999 | 0.826 | 0.5814 | 0.3846 | 0.4630 | 1.5988 |
| 0.0316 | 17.0 | 4250 | 1.9195 | 0.828 | 0.6095 | 0.3282 | 0.4267 | 1.5828 |
| 0.0266 | 18.0 | 4500 | 1.9005 | 0.825 | 0.5926 | 0.3282 | 0.4224 | 1.5788 |
| 0.0266 | 19.0 | 4750 | 1.8568 | 0.826 | 0.5814 | 0.3846 | 0.4630 | 1.5988 |
| 0.0243 | 20.0 | 5000 | 1.9479 | 0.832 | 0.6709 | 0.2718 | 0.3869 | 1.5692 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,557 | [
[
-0.044403076171875,
-0.040069580078125,
0.01274871826171875,
0.00481414794921875,
-0.0053253173828125,
-0.007080078125,
0.002780914306640625,
-0.01015472412109375,
0.04486083984375,
0.0242767333984375,
-0.048553466796875,
-0.053253173828125,
-0.045562744140625,
... |
KETI-AIR-Downstream/long-ke-t5-base-summarization_e10 | 2023-09-18T01:28:33.000Z | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:jsonl_dataset_sum.py",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | KETI-AIR-Downstream | null | null | KETI-AIR-Downstream/long-ke-t5-base-summarization_e10 | 0 | 2 | transformers | 2023-06-05T04:24:59 | ---
tags:
- generated_from_trainer
datasets:
- jsonl_dataset_sum.py
metrics:
- rouge
widget:
- text: 'summarization-num_lines-1: 현대자동차는 18일(현지 시간) 이탈리아 레이크 코모에서 개최된 ''현대 리유니온''
행사에서 ''포니 쿠페 콘셉트'' 복원 모델을 세계에 첫 공개했습니다. 이 프로젝트는 현대차의 창업자인 정주영 선대 회장의 수출보국(輸出報國)
정신과 포니 쿠페를 통한 글로벌 브랜드 정립에 대한 끊임없는 열정과 도전 정신을 재조명하기 위한 것입니다. 현대차에 따르면, 이번 현대 리유니온
행사는 회사의 역사를 다시 돌아보며 변하지 않는 미래 지향적인 비전과 방향성을 공유하는 브랜드 유산 행사입니다.'
example_title: sample 1
base_model: KETI-AIR/long-ke-t5-base
model-index:
- name: summarization_all
results:
- task:
type: summarization
name: Summarization
dataset:
name: jsonl_dataset_sum.py
type: jsonl_dataset_sum.py
config: 'null'
split: None
metrics:
- type: rouge
value: 21.9857
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_all
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the jsonl_dataset_sum.py dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1442
- Rouge1: 21.9857
- Rouge2: 10.2876
- Rougel: 21.4026
- Rougelsum: 21.4278
- Gen Len: 86.2560
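The ROUGE-1 score above measures unigram overlap between generated and reference summaries. A simplified sketch of the F-measure variant (illustrative only; the scores in this card come from the standard ROUGE implementation, which applies its own tokenization):

```python
from collections import Counter

def rouge1_f(reference: str, hypothesis: str) -> float:
    """Unigram-overlap F1 between a reference and a generated summary."""
    ref_counts = Counter(reference.split())
    hyp_counts = Counter(hypothesis.split())
    overlap = sum((ref_counts & hyp_counts).values())  # clipped match count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```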
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2503 | 1.0 | 184670 | 1.2439 | 20.2525 | 9.1467 | 19.7454 | 19.771 | 87.1766 |
| 1.1629 | 2.0 | 369340 | 1.1773 | 21.0068 | 9.6691 | 20.4565 | 20.4888 | 89.6074 |
| 1.1087 | 3.0 | 554010 | 1.1431 | 21.0216 | 9.6545 | 20.489 | 20.5108 | 85.5895 |
| 1.056 | 4.0 | 738680 | 1.1247 | 21.6776 | 10.1424 | 21.09 | 21.1168 | 89.6576 |
| 1.0199 | 5.0 | 923350 | 1.1179 | 21.6563 | 10.0965 | 21.0814 | 21.1056 | 89.2454 |
| 0.9652 | 6.0 | 1108020 | 1.1122 | 21.6209 | 10.0725 | 21.0623 | 21.0864 | 86.7079 |
| 0.92 | 7.0 | 1292690 | 1.1136 | 21.9396 | 10.2734 | 21.3465 | 21.3745 | 86.5547 |
| 0.8804 | 8.0 | 1477360 | 1.1228 | 21.8457 | 10.1858 | 21.2552 | 21.278 | 87.6413 |
| 0.8447 | 9.0 | 1662030 | 1.1327 | 21.92 | 10.2635 | 21.3415 | 21.3633 | 86.4453 |
| 0.7678 | 10.0 | 1846700 | 1.1442 | 21.9857 | 10.2876 | 21.4026 | 21.4278 | 86.2560 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2
| 3,264 | [
[
-0.04644775390625,
-0.038543701171875,
0.010101318359375,
0.01050567626953125,
-0.0092926025390625,
-0.012420654296875,
-0.007045745849609375,
-0.012176513671875,
0.034637451171875,
0.0256500244140625,
-0.0458984375,
-0.045440673828125,
-0.047637939453125,
-... |
fffffly/albert_model | 2023-06-06T17:02:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fffffly | null | null | fffffly/albert_model | 0 | 2 | transformers | 2023-06-05T04:56:16 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: albert_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_model
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6560
- Accuracy: 0.9070
- F1: 0.8852
- Recall: 0.9122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
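The Adam optimizer listed above keeps exponential moving averages of the gradient and its square, with the two betas controlling their decay. A single-parameter sketch of one update step (illustrative only, not the optimizer code used in training):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a single scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```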
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|
| No log | 1.0 | 167 | 0.3571 | 0.8351 | 0.8142 | 0.9198 |
| No log | 2.0 | 334 | 0.2670 | 0.8891 | 0.8683 | 0.9313 |
| 0.3358 | 3.0 | 501 | 0.2643 | 0.9115 | 0.8885 | 0.8969 |
| 0.3358 | 4.0 | 668 | 0.3804 | 0.9130 | 0.8910 | 0.9046 |
| 0.3358 | 5.0 | 835 | 0.4376 | 0.9070 | 0.8848 | 0.9084 |
| 0.1007 | 6.0 | 1002 | 0.4957 | 0.9100 | 0.8859 | 0.8893 |
| 0.1007 | 7.0 | 1169 | 0.6375 | 0.8801 | 0.8601 | 0.9389 |
| 0.1007 | 8.0 | 1336 | 0.5978 | 0.8996 | 0.8780 | 0.9198 |
| 0.012 | 9.0 | 1503 | 0.6101 | 0.9025 | 0.8816 | 0.9237 |
| 0.012 | 10.0 | 1670 | 0.6209 | 0.9085 | 0.8847 | 0.8931 |
| 0.012 | 11.0 | 1837 | 0.6485 | 0.9010 | 0.8787 | 0.9122 |
| 0.0007 | 12.0 | 2004 | 0.6480 | 0.9070 | 0.8852 | 0.9122 |
| 0.0007 | 13.0 | 2171 | 0.6527 | 0.9055 | 0.8835 | 0.9122 |
| 0.0007 | 14.0 | 2338 | 0.6557 | 0.9055 | 0.8835 | 0.9122 |
| 0.0002 | 15.0 | 2505 | 0.6560 | 0.9070 | 0.8852 | 0.9122 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,529 | [
[
-0.041900634765625,
-0.037628173828125,
0.01377105712890625,
0.0096588134765625,
-0.0003116130828857422,
-0.0173187255859375,
0.00534820556640625,
-0.01097869873046875,
0.030792236328125,
0.0258331298828125,
-0.0489501953125,
-0.053985595703125,
-0.0458374023437... |
fffffly/biobert_model | 2023-06-05T06:29:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | fffffly | null | null | fffffly/biobert_model | 0 | 2 | transformers | 2023-06-05T05:09:38 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9645
- Accuracy: 0.8711
- F1: 0.8475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 334 | 0.6463 | 0.6897 | 0.7129 |
| 0.4503 | 2.0 | 668 | 0.3590 | 0.8651 | 0.8269 |
| 0.2715 | 3.0 | 1002 | 0.4549 | 0.8711 | 0.8252 |
| 0.2715 | 4.0 | 1336 | 0.6012 | 0.8681 | 0.8434 |
| 0.1335 | 5.0 | 1670 | 0.6307 | 0.8576 | 0.8313 |
| 0.0746 | 6.0 | 2004 | 0.7658 | 0.8636 | 0.8366 |
| 0.0746 | 7.0 | 2338 | 0.8658 | 0.8666 | 0.8436 |
| 0.0307 | 8.0 | 2672 | 0.8312 | 0.8711 | 0.8453 |
| 0.0148 | 9.0 | 3006 | 0.8922 | 0.8651 | 0.8421 |
| 0.0148 | 10.0 | 3340 | 0.8761 | 0.8726 | 0.8490 |
| 0.0128 | 11.0 | 3674 | 0.9329 | 0.8681 | 0.8462 |
| 0.0105 | 12.0 | 4008 | 0.9512 | 0.8666 | 0.8441 |
| 0.0105 | 13.0 | 4342 | 0.9553 | 0.8711 | 0.8475 |
| 0.0069 | 14.0 | 4676 | 0.9731 | 0.8681 | 0.8445 |
| 0.0046 | 15.0 | 5010 | 0.9645 | 0.8711 | 0.8475 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,377 | [
[
-0.034271240234375,
-0.035736083984375,
0.0186920166015625,
-0.0004048347473144531,
-0.006679534912109375,
-0.0120697021484375,
0.00652313232421875,
-0.009521484375,
0.03582763671875,
0.0256805419921875,
-0.054656982421875,
-0.06201171875,
-0.045745849609375,
... |
79man/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-05T07:45:02.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 79man | null | null | 79man/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-05T07:44:24 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 687.00 +/- 278.70
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 79man -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 79man -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga 79man
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
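The exploration settings above imply a linear epsilon-greedy schedule: epsilon decays from 1.0 to `exploration_final_eps` over the first `exploration_fraction` of the `n_timesteps` budget, then stays flat (this is SB3's documented linear schedule; the snippet below is a plain-Python sketch of it, with the values copied from the hyperparameters):

```python
# Epsilon schedule implied by the hyperparameters above (sketch, not SB3 code).
n_timesteps = 1_000_000
exploration_fraction = 0.1
exploration_final_eps = 0.01

decay_steps = int(n_timesteps * exploration_fraction)  # steps spent annealing

def epsilon(step: int) -> float:
    """Linearly anneal epsilon from 1.0 to the final value, then hold it."""
    if step >= decay_steps:
        return exploration_final_eps
    frac = step / decay_steps
    return 1.0 + frac * (exploration_final_eps - 1.0)
```

So the first 100k steps are spent annealing exploration — the same number of transitions as `learning_starts`, meaning gradient updates begin just as epsilon bottoms out.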
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,750 | [
[
-0.043548583984375,
-0.03875732421875,
0.0200347900390625,
0.0244903564453125,
-0.01097869873046875,
-0.01812744140625,
0.01023101806640625,
-0.0127105712890625,
0.01108551025390625,
0.0233154296875,
-0.0714111328125,
-0.0341796875,
-0.0250396728515625,
-0.0... |
laaraap/clasificador-imdb-sentiment | 2023-06-05T08:59:38.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | laaraap | null | null | laaraap/clasificador-imdb-sentiment | 0 | 2 | transformers | 2023-06-05T08:03:10 | ---
license: apache-2.0
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-imdb-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-imdb-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3619
- Accuracy: 0.925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.2296 | 0.92 |
| No log | 2.0 | 250 | 0.2825 | 0.92 |
| No log | 3.0 | 375 | 0.3619 | 0.925 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,486 | [
[
-0.03558349609375,
-0.03924560546875,
0.01044464111328125,
0.01299285888671875,
-0.0316162109375,
-0.0340576171875,
-0.01641845703125,
-0.02099609375,
0.01312255859375,
0.025238037109375,
-0.055389404296875,
-0.052001953125,
-0.04449462890625,
-0.02026367187... |
synpjh/distilbert-base-uncased-finetuned-clinc | 2023-06-05T10:22:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | synpjh | null | null | synpjh/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-06-05T08:21:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9170967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7777
- Accuracy: 0.9171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2883 | 1.0 | 318 | 3.2778 | 0.7394 |
| 2.623 | 2.0 | 636 | 1.8740 | 0.8290 |
| 1.5441 | 3.0 | 954 | 1.1618 | 0.8894 |
| 1.0112 | 4.0 | 1272 | 0.8600 | 0.9090 |
| 0.7999 | 5.0 | 1590 | 0.7777 | 0.9171 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
| 1,926 | [
[
-0.0347900390625,
-0.040069580078125,
0.012451171875,
0.00788116455078125,
-0.0277252197265625,
-0.0257720947265625,
-0.01236724853515625,
-0.00861358642578125,
0.00223541259765625,
0.0230560302734375,
-0.0465087890625,
-0.048553466796875,
-0.057586669921875,
... |
abbymark/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-07T08:03:41.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | abbymark | null | null | abbymark/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-05T08:46:53 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 827.00 +/- 273.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abbymark -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abbymark -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga abbymark
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
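One practical consequence of `buffer_size` with `optimize_memory_usage=False`: the replay buffer stores `obs` and `next_obs` separately as uint8 frames. A rough back-of-the-envelope estimate, assuming the standard AtariWrapper output of 84x84 grayscale pixels with `frame_stack=4` (an assumption about the wrapper's defaults, not a measured number):

```python
# Rough replay-buffer memory estimate for the settings above.
buffer_size = 100_000        # transitions kept in the replay buffer
obs_bytes = 84 * 84 * 4      # one uint8 observation: 84x84 pixels, 4 stacked frames

# optimize_memory_usage=False stores obs and next_obs separately, hence the factor 2.
buffer_gb = buffer_size * obs_bytes * 2 / 1e9
print(f"approx. replay buffer size: {buffer_gb:.2f} GB")
```

That is on the order of 5.6 GB for observations alone, which is worth budgeting for before launching a 10M-step run.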
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,760 | [
[
-0.043975830078125,
-0.03936767578125,
0.019775390625,
0.0245513916015625,
-0.0117950439453125,
-0.0186767578125,
0.011444091796875,
-0.0132293701171875,
0.011260986328125,
0.0227203369140625,
-0.0732421875,
-0.0374755859375,
-0.0265655517578125,
-0.00322341... |
poltextlab/xlm-roberta-large-other-cap | 2023-07-04T17:40:28.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-other-cap | 0 | 2 | transformers | 2023-06-05T09:06:15 |
---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-other-cap
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `other` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; set this to match your setup

def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is expected to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-other-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
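The `np.argmax` step above picks the highest-scoring label per example; mapping it back to a CAP code is a plain dictionary lookup. A minimal, self-contained illustration with hypothetical logits and a truncated copy of `CAP_NUM_DICT`:

```python
# Hypothetical logits for one example; indices follow CAP_NUM_DICT's keys.
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3'}  # truncated for illustration

logits = [0.1, 2.3, -0.5]
best = max(range(len(logits)), key=logits.__getitem__)  # argmax without numpy
cap_code = CAP_NUM_DICT[best]
```

Here the second label wins, so the predicted CAP major topic code is `'2'`.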
### Fine-tuning procedure
`xlm-roberta-large-other-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 4916 examples (10% of the available data).<br>
Model accuracy is **0.77**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.72 | 0.83 | 0.77 | 572 |
| 1 | 0.74 | 0.73 | 0.73 | 168 |
| 2 | 0.85 | 0.9 | 0.88 | 123 |
| 3 | 0.76 | 0.76 | 0.76 | 115 |
| 4 | 0.82 | 0.8 | 0.81 | 252 |
| 5 | 0.89 | 0.73 | 0.81 | 113 |
| 6 | 0.68 | 0.77 | 0.72 | 179 |
| 7 | 0.75 | 0.8 | 0.77 | 169 |
| 8 | 0.83 | 0.77 | 0.8 | 92 |
| 9 | 0.77 | 0.84 | 0.81 | 108 |
| 10 | 0.66 | 0.78 | 0.71 | 274 |
| 11 | 0.73 | 0.68 | 0.71 | 97 |
| 12 | 0.8 | 0.68 | 0.74 | 82 |
| 13 | 0.78 | 0.72 | 0.75 | 351 |
| 14 | 0.77 | 0.79 | 0.78 | 367 |
| 15 | 0.78 | 0.8 | 0.79 | 137 |
| 16 | 0.8 | 0.78 | 0.79 | 107 |
| 17 | 0.8 | 0.74 | 0.77 | 874 |
| 18 | 0.82 | 0.73 | 0.77 | 604 |
| 19 | 0.74 | 0.87 | 0.8 | 70 |
| 20 | 0.64 | 0.58 | 0.61 | 12 |
| 21 | 0.74 | 0.7 | 0.72 | 50 |
| macro avg | 0.77 | 0.76 | 0.76 | 4916 |
| weighted avg | 0.77 | 0.77 | 0.77 | 4916 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,677 | [
[
-0.042266845703125,
-0.04852294921875,
0.00868988037109375,
0.01922607421875,
-0.002887725830078125,
-0.0005445480346679688,
-0.0273590087890625,
-0.0244293212890625,
0.017547607421875,
0.023193359375,
-0.036285400390625,
-0.048187255859375,
-0.056610107421875,
... |
poltextlab/xlm-roberta-large-social-cap | 2023-07-04T17:40:26.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-social-cap | 0 | 2 | transformers | 2023-06-05T09:08:26 |
---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-social-cap
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `social` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; set this to match your setup

def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is expected to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-social-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-social-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 1020 examples (10% of the available data).<br>
Model accuracy is **0.79**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.74 | 0.74 | 0.74 | 46 |
| 1 | 0.79 | 0.69 | 0.74 | 39 |
| 2 | 0.74 | 0.85 | 0.79 | 27 |
| 3 | 0.76 | 0.9 | 0.83 | 21 |
| 4 | 0.76 | 0.67 | 0.71 | 39 |
| 5 | 0.93 | 0.93 | 0.93 | 72 |
| 6 | 0.89 | 0.82 | 0.85 | 60 |
| 7 | 0.77 | 0.96 | 0.85 | 24 |
| 8 | 0.85 | 0.88 | 0.86 | 120 |
| 9 | 0.86 | 0.87 | 0.87 | 85 |
| 10 | 0.86 | 0.83 | 0.84 | 115 |
| 11 | 0.72 | 0.8 | 0.76 | 89 |
| 12 | 0.79 | 0.94 | 0.86 | 16 |
| 13 | 0.73 | 0.57 | 0.64 | 14 |
| 14 | 0.7 | 0.64 | 0.67 | 11 |
| 15 | 0.64 | 0.47 | 0.54 | 15 |
| 16 | 0 | 0 | 0 | 0 |
| 17 | 0.65 | 0.65 | 0.65 | 17 |
| 18 | 0.67 | 0.69 | 0.68 | 134 |
| 19 | 0.5 | 0.5 | 0.5 | 38 |
| 20 | 0.94 | 0.76 | 0.84 | 38 |
| macro avg | 0.73 | 0.72 | 0.72 | 1020 |
| weighted avg | 0.79 | 0.79 | 0.78 | 1020 |
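The weighted averages in the last row are support-weighted means of the per-label scores. For illustration only, here is the computation restricted to the first two labels of the table above:

```python
# F1 and support for labels 0 and 1, copied from the table above.
f1      = [0.74, 0.74]
support = [46, 39]

weighted_f1 = sum(f * s for f, s in zip(f1, support)) / sum(support)
```

Extending the sums over all 21 labels reproduces the reported weighted averages.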
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,614 | [
[
-0.03814697265625,
-0.05029296875,
0.00855255126953125,
0.022552490234375,
-0.004512786865234375,
0.00400543212890625,
-0.027984619140625,
-0.02569580078125,
0.017974853515625,
0.0185394287109375,
-0.03631591796875,
-0.049835205078125,
-0.057586669921875,
0.... |
poltextlab/xlm-roberta-large-legal-cap | 2023-07-04T17:40:26.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-legal-cap | 0 | 2 | transformers | 2023-06-05T09:12:13 |
---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-legal-cap
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `legal` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; set this to match your setup

def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is expected to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-legal-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-legal-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
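The early-stopping rule amounts to: halt once the monitored eval loss has failed to improve on its best prior value for `patience` consecutive evaluations. A plain-Python sketch of that logic (an illustration only, not the `transformers` implementation, which also supports an improvement threshold):

```python
def should_stop(eval_losses, patience=2):
    """Return True once the last `patience` eval losses did not beat the best prior loss."""
    if len(eval_losses) <= patience:
        return False
    best_so_far = min(eval_losses[:-patience])
    return all(loss >= best_so_far for loss in eval_losses[-patience:])
```

For example, with losses `[0.9, 0.7, 0.72, 0.71]` and patience 2, training stops: neither of the last two evaluations improved on 0.7.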
## Model performance
The model was evaluated on a test set of 60677 examples (10% of the available data).<br>
Model accuracy is **0.89**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.88 | 0.74 | 0.81 | 2765 |
| 1 | 0.81 | 0.8 | 0.8 | 1208 |
| 2 | 0.92 | 0.92 | 0.92 | 3875 |
| 3 | 0.89 | 0.9 | 0.9 | 2105 |
| 4 | 0.79 | 0.89 | 0.84 | 2065 |
| 5 | 0.91 | 0.91 | 0.91 | 2521 |
| 6 | 0.88 | 0.85 | 0.87 | 2001 |
| 7 | 0.9 | 0.91 | 0.91 | 1573 |
| 8 | 0.89 | 0.86 | 0.88 | 683 |
| 9 | 0.91 | 0.9 | 0.9 | 3349 |
| 10 | 0.9 | 0.88 | 0.89 | 4573 |
| 11 | 0.87 | 0.86 | 0.87 | 2279 |
| 12 | 0.85 | 0.86 | 0.85 | 1419 |
| 13 | 0.84 | 0.86 | 0.85 | 3615 |
| 14 | 0.85 | 0.9 | 0.87 | 3759 |
| 15 | 0.92 | 0.9 | 0.91 | 1562 |
| 16 | 0.89 | 0.93 | 0.91 | 1882 |
| 17 | 0.8 | 0.86 | 0.83 | 1483 |
| 18 | 0.87 | 0.88 | 0.87 | 7194 |
| 19 | 0.88 | 0.9 | 0.89 | 3951 |
| 20 | 0.76 | 0.75 | 0.75 | 106 |
| 21 | 0.99 | 0.96 | 0.98 | 6709 |
| macro avg | 0.87 | 0.87 | 0.87 | 60677 |
| weighted avg | 0.89 | 0.89 | 0.89 | 60677 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,678 | [
[
-0.03924560546875,
-0.047271728515625,
0.0081787109375,
0.01947021484375,
-0.00592803955078125,
-0.0014781951904296875,
-0.0246734619140625,
-0.02459716796875,
0.015899658203125,
0.0247955322265625,
-0.033447265625,
-0.049346923828125,
-0.0562744140625,
0.00... |
poltextlab/xlm-roberta-large-speech-cap | 2023-07-04T17:40:27.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-speech-cap | 0 | 2 | transformers | 2023-06-05T09:13:50 |
---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-speech-cap
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `speech` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; set this to match your setup

def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is expected to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-speech-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-speech-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 92244 examples (10% of the available data).<br>
Model accuracy is **0.78**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.67 | 0.76 | 0.71 | 6352 |
| 1 | 0.7 | 0.54 | 0.61 | 2436 |
| 2 | 0.76 | 0.88 | 0.81 | 4133 |
| 3 | 0.8 | 0.7 | 0.75 | 2588 |
| 4 | 0.69 | 0.67 | 0.68 | 3080 |
| 5 | 0.82 | 0.78 | 0.8 | 3053 |
| 6 | 0.74 | 0.74 | 0.74 | 2340 |
| 7 | 0.81 | 0.75 | 0.78 | 1658 |
| 8 | 0.79 | 0.59 | 0.68 | 1759 |
| 9 | 0.71 | 0.82 | 0.76 | 3428 |
| 10 | 0.7 | 0.73 | 0.72 | 3982 |
| 11 | 0.74 | 0.65 | 0.69 | 2622 |
| 12 | 0.75 | 0.62 | 0.68 | 1914 |
| 13 | 0.67 | 0.69 | 0.68 | 3155 |
| 14 | 0.78 | 0.77 | 0.77 | 2900 |
| 15 | 0.77 | 0.64 | 0.7 | 1762 |
| 16 | 0.73 | 0.44 | 0.55 | 1320 |
| 17 | 0.74 | 0.69 | 0.71 | 4872 |
| 18 | 0.67 | 0.72 | 0.69 | 7497 |
| 19 | 0.64 | 0.66 | 0.65 | 2411 |
| 20 | 0.71 | 0.85 | 0.77 | 2764 |
| 21 | 0.94 | 0.94 | 0.94 | 26218 |
| macro avg | 0.74 | 0.71 | 0.72 | 92244 |
| weighted avg | 0.78 | 0.78 | 0.78 | 92244 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,682 | [
[
-0.0399169921875,
-0.051361083984375,
0.005340576171875,
0.0208740234375,
-0.00238037109375,
-0.0005064010620117188,
-0.0307159423828125,
-0.02203369140625,
0.0143890380859375,
0.02105712890625,
-0.036407470703125,
-0.048858642578125,
-0.05419921875,
0.00461... |
poltextlab/xlm-roberta-large-budget-cap | 2023-07-04T17:40:27.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-budget-cap | 0 | 2 | transformers | 2023-06-05T09:17:14 |
---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-budget-cap
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `budget` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
def tokenize_dataset(data : pd.DataFrame):
tokenized = tokenizer(data["text"],
max_length=MAXLEN,
truncation=True,
padding="max_length")
return tokenized
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-budget-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-budget-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 11802 examples (10% of the available data).<br>
Model accuracy is **0.98**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.98 | 0.97 | 0.98 | 1137 |
| 1 | 0.95 | 0.95 | 0.95 | 165 |
| 2 | 0.98 | 0.98 | 0.98 | 650 |
| 3 | 0.98 | 0.99 | 0.98 | 586 |
| 4 | 0.97 | 0.96 | 0.96 | 442 |
| 5 | 0.99 | 0.99 | 0.99 | 1621 |
| 6 | 0.98 | 0.97 | 0.98 | 195 |
| 7 | 1 | 0.98 | 0.99 | 89 |
| 8 | 0.92 | 0.94 | 0.93 | 51 |
| 9 | 0.99 | 0.99 | 0.99 | 463 |
| 10 | 0.98 | 0.98 | 0.98 | 1007 |
| 11 | 0.96 | 0.99 | 0.97 | 509 |
| 12 | 0.97 | 0.97 | 0.97 | 276 |
| 13 | 0.98 | 0.98 | 0.98 | 714 |
| 14 | 0.98 | 0.96 | 0.97 | 382 |
| 15 | 0.97 | 0.97 | 0.97 | 485 |
| 16 | 0.97 | 0.97 | 0.97 | 86 |
| 17 | 0.99 | 0.96 | 0.97 | 291 |
| 18 | 0.96 | 0.97 | 0.97 | 1342 |
| 19 | 0.98 | 0.98 | 0.98 | 673 |
| 20 | 0.98 | 0.98 | 0.98 | 304 |
| 21 | 0.93 | 0.94 | 0.93 | 334 |
| macro avg | 0.97 | 0.97 | 0.97 | 11802 |
| weighted avg | 0.98 | 0.98 | 0.98 | 11802 |
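As a sanity check, the macro-average F1 can be recomputed directly from the per-label scores in the table above (rounding to two decimals reproduces the reported 0.97):

```python
# Per-label F1 scores for labels 0-21, copied from the table above.
f1_scores = [0.98, 0.95, 0.98, 0.98, 0.96, 0.99, 0.98, 0.99, 0.93, 0.99, 0.98,
             0.97, 0.97, 0.98, 0.97, 0.97, 0.97, 0.97, 0.97, 0.98, 0.98, 0.93]

# Macro average: unweighted mean over the 22 labels.
macro_f1 = sum(f1_scores) / len(f1_scores)
print(round(macro_f1, 2))  # -> 0.97
```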
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. For `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually (e.g. `pip install sentencepiece`).
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,682 | [
[
-0.04327392578125,
-0.047149658203125,
0.00689697265625,
0.0213165283203125,
-0.00124359130859375,
-0.0004911422729492188,
-0.025848388671875,
-0.022735595703125,
0.01534271240234375,
0.0226593017578125,
-0.0374755859375,
-0.044281005859375,
-0.05255126953125,
... |
sailor955/distilbert-base-uncased-finetuned-clinc | 2023-06-05T10:25:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | sailor955 | null | null | sailor955/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-06-05T10:11:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6282 | 2.0 | 636 | 1.8753 | 0.8371 |
| 1.548 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
| 1,889 | [
[
-0.034820556640625,
-0.041107177734375,
0.01239013671875,
0.006977081298828125,
-0.0270538330078125,
-0.024658203125,
-0.01287078857421875,
-0.0084991455078125,
0.002872467041015625,
0.0221405029296875,
-0.046722412109375,
-0.048431396484375,
-0.058074951171875,... |
dwojcik/gpt2-large-fine-tuned-context-256 | 2023-07-19T11:38:42.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | dwojcik | null | null | dwojcik/gpt2-large-fine-tuned-context-256 | 0 | 2 | transformers | 2023-06-05T11:38:35 | ---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model description
This model is a fine-tuned version of the GPT-2 Large pre-trained model, specifically adapted for text analysis on articles from the Przegląd Elektrotechniczny journal's 2023 edition. The fine-tuning process was designed to provide valuable insights into the journal's content, helping identify prevalent topics, themes, and ideas discussed in the articles.
## Model Description
This model is based on GPT-2 Large, which has 774 million parameters and is designed for a wide range of Natural Language Processing tasks, including text generation, summarization, and classification. The base GPT-2 model has achieved state-of-the-art results on various NLP benchmarks and has become a standard in the field.
## Fine-tuning Details
- Dataset: The dataset consists of the text from all the articles published in the 2023 edition of the Przegląd Elektrotechniczny journal. The text data was cleaned and preprocessed before fine-tuning.
- Tokenizer: The Hugging Face AutoTokenizer for gpt2-large was used to tokenize the cleaned texts.
- Training: The model was trained for 10 epochs, using the Adam optimizer with a learning rate of 3e-4 and gradient accumulation steps of 8.
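With gradient accumulation steps of 8, gradients are summed over 8 micro-batches before each optimizer update, so the effective batch size is 8x the per-device batch size. A minimal sketch of that update schedule (illustrative only; not the actual training script):

```python
def count_optimizer_steps(num_micro_batches, accumulation_steps=8):
    """Count optimizer updates when stepping once per `accumulation_steps` micro-batches."""
    steps = 0
    for i in range(1, num_micro_batches + 1):
        # loss.backward() would accumulate gradients here
        if i % accumulation_steps == 0:
            steps += 1  # optimizer.step(); optimizer.zero_grad()
    return steps

# 80 micro-batches with accumulation of 8 yield 10 optimizer updates.
print(count_optimizer_steps(80))  # -> 10
```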
## Usage
This fine-tuned model can be used to generate text based on prompts related to the Przegląd Elektrotechniczny journal's content. Below is an example of how to use this model for text generation:
```python
from transformers import pipeline
generator = pipeline(
"text-generation",
model="dwojcik/gpt2-large-fine-tuned-context-256",
device=0, # for GPU, set to -1 if using CPU
)
generated_text = generator("Input your prompt here", max_length=100, num_return_sequences=1)[0]['generated_text']
print(generated_text)
```
## Limitations
Though the model has been specifically fine-tuned for text analysis and generation based on the Przegląd Elektrotechniczny journal, its accuracy and appropriateness might vary depending on the prompt and context. Additionally, it might not perform with equal efficiency for different types of prompts and text-related tasks in other domains. | 2,167 | [
[
-0.027069091796875,
-0.0823974609375,
0.04034423828125,
0.001422882080078125,
-0.03955078125,
-0.0231170654296875,
-0.026275634765625,
-0.0357666015625,
-0.01483154296875,
0.036956787109375,
-0.05780029296875,
-0.0276031494140625,
-0.051177978515625,
0.02311... |
hoangNg210/lvm-med | 2023-07-19T03:30:42.000Z | [
"transformers",
"medical",
"image-segmentation",
"en",
"arxiv:2306.11925",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | hoangNg210 | null | null | hoangNg210/lvm-med | 0 | 2 | transformers | 2023-06-05T14:20:00 | ---
license: apache-2.0
language:
- en
pipeline_tag: image-segmentation
tags:
- medical
---
# LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
**[DFKI, Research Center for Artificial Intelligence](https://www.dfki.de/web)**
Duy M. H. Nguyen
Hoang Nguyen
Nghiem T. Diep
Tan N. Pham
Tri Cao
Binh T. Nguyen
Paul Swoboda
Nhat Ho
Shadi Albarqouni
Pengtao Xie
Daniel Sonntag
Mathias Niepert
[[`Paper`](https://arxiv.org/abs/2306.11925)] [[`BibTeX`](#citing-lvm-med)]
PyTorch implementation and pretrained models for LVM-Med. For details, see the paper: **LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching**.
LVM-Med models produce high-performance visual features that can be directly employed with classifiers as simple as linear layers on a variety of computer vision tasks; these visual features are robust and perform well across domains without any requirement for fine-tuning. The models were pretrained on a dataset of 142 M images without using any labels or annotations.
https://user-images.githubusercontent.com/60359573/230078733-5faffa19-e6ce-4c55-9200-62dd76f8236a.mp4
<div align="center">
Visualization of the three first principal components of the patch features of all frames, mapped to RGB values.
</div>
## Pretrained models
<table>
<tr>
<th>Arch</th>
<th>Params (M)</th>
<th> 2D Segmentation (Dice) </th>
<th> 3D Segmentation (3D IoU) </th>
<th>Weights</th>
</tr>
<tr>
<td>ResNet-50</td>
<td>25.5M</td>
<td>83.05</td>
<td>79.02</td>
<td> <a href="https://drive.google.com/file/d/11Uamq4bT_AbTf8sigIctIAnQJN4EethW/view?usp=sharing">backbone</a> </td>
</tr>
<tr>
<td>ViT-B</td>
<td>86.0M</td>
<td>85.80</td>
<td>73.85</td>
<td> <a href="https://drive.google.com/file/d/14bX8wdw-c3VUw3XPAtFMB-wFE03q0eCi/view?usp=sharing">backbone</a> </td>
</tr>
</table>
### Pretrained models via PyTorch Hub
Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install the PyTorch and torchvision dependencies (these are the only required dependencies). Installing both PyTorch and torchvision with CUDA support is strongly recommended.
The corresponding model card can be found in the [[`MODEL_CARD.md`](MODEL_CARD.md)] file.
```python
import torch
dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')
```
## Installation
The training and evaluation code requires PyTorch 2.0 and xFormers 0.0.18 as well as a number of other 3rd party packages. To setup all the required dependencies for training and evaluation, please follow the instructions below:
*conda* **(Recommended)** - Create and activate a `dinov2` conda environment using the provided environment definition:
```shell
conda env create -f conda.yaml
conda activate dinov2
```
*pip* - Use the provided `requirements.txt` to install the dependencies:
```shell
pip install -r requirements.txt
```
## Data preparation
Expected contents for the ImageNet-1k data folder:
- `<root>/test/ILSVRC2012_test_00000001.JPEG`
- `<root>/test/[..]`
- `<root>/test/ILSVRC2012_test_00100000.JPEG`
- `<root>/train/n01440764/n01440764_10026.JPEG`
- `<root>/train/[...]`
- `<root>/train/n15075141/n15075141_9993.JPEG`
- `<root>/val/n01440764/ILSVRC2012_val_00000293.JPEG`
- `<root>/val/[...]`
- `<root>/val/n15075141/ILSVRC2012_val_00049174.JPEG`
- `<root>/labels.txt`
For ImageNet-22k, please adapt the Dataset object accordingly.
## Training
### Fast setup: training DINOv2 ViT-L/16 on ImageNet-1k
Run DINOv2 on 4 A100-80GB nodes (32 GPUs) in a SLURM cluster environment with submitit.
```shell
python dinov2/run/train/train.py \
--nodes 4 \
--config-file dinov2/configs/train/vitl16_short.yaml \
--output-dir <PATH/TO/OUTPUT/DIR> \
train.dataset_path=ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```
Training time is approximately 1 day and the resulting checkpoint should reach 81.6% on k-NN eval and 82.9% on linear eval.
The training code saves the weights of the teacher in the `eval` folder every 12500 iterations for evaluation.
### Long setup: training DINOv2 ViT-L/14 on ImageNet-22k
Run on 12 A100-80GB nodes (96 GPUs) in a SLURM cluster environment with submitit.
```
python dinov2/run/train/train.py \
--nodes 12 \
--config-file dinov2/configs/train/vitl14.yaml \
--output-dir <PATH/TO/OUTPUT/DIR> \
train.dataset_path=ImageNet22k:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```
Training time is approximately 3.3 days and the resulting checkpoint should reach 82.0% on k-NN eval and 84.5% on linear eval.
The training code saves the weights of the teacher in the `eval` folder every 12500 iterations for evaluation.
## Evaluation
The training code regularly saves the teacher weights. In order to evaluate the model, run the following evaluation on a single node:
### k-NN classification on ImageNet-1k
```
python dinov2/run/eval/knn.py \
--config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
--pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
--output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/knn \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```
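Conceptually, the k-NN evaluation labels each validation feature by a majority vote over the labels of its k nearest training features. A toy pure-Python sketch of that idea (illustrative only, not the dinov2 implementation):

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Label `query` by majority vote among its k nearest training features."""
    # Squared Euclidean distance from the query to every training feature.
    dists = [sum((a - b) ** 2 for a, b in zip(feat, query)) for feat in train_feats]
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D features standing in for extracted backbone features.
train_feats = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_labels = ["cat", "cat", "dog", "dog"]
print(knn_predict(train_feats, train_labels, (0.05, 0.1), k=3))  # -> cat
```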
### Logistic regression classification on ImageNet-1k
```
python dinov2/run/eval/log_regression.py \
--config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
--pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
--output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/logreg \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```
### Linear classification with data augmentation on ImageNet-1k
```
python dinov2/run/eval/linear.py \
--config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
--pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
--output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/linear \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```
We release the weights from evaluating the different models:
<table>
<tr>
<th>model</th>
<th>ImageNet<br />top-1</th>
<th>linear evaluation</th>
</tr>
<tr>
<td>ViT-S/14 distilled</td>
<td align="right">81.1%</td>
<td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_linear_head.pth">linear head weights</a></td>
</tr>
<tr>
<td>ViT-B/14 distilled</td>
<td align="right">84.5%</td>
<td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_linear_head.pth">linear head weights</a></td>
</tr>
<tr>
<td>ViT-L/14 distilled</td>
<td align="right">86.3%</td>
<td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_linear_head.pth">linear head weights</a></td>
</tr>
<tr>
<td>ViT-g/14</td>
<td align="right">86.5%</td>
<td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_linear_head.pth">linear head weights</a></td>
</tr>
</table>
The performance of the provided pretrained model weights can be evaluated as follows on ImageNet-1k:
```
python dinov2/run/eval/linear.py \
--config-file dinov2/configs/eval/vitg14_pretrain.yaml \
--pretrained-weights https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```
## License
This repository and the models are released under the CC-BY-NC as found in the [LICENSE](LICENSE) file.
## Contributing
See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
## Citing LVM-Med
If you find this repository useful, please consider giving a star :star: and a citation :t-rex::
```
@misc{nguyen2023lvm,
title={LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching},
author={Nguyen, Duy MH and Nguyen, Hoang and Diep, Nghiem T and Pham, Tan N and Cao, Tri and Nguyen, Binh T and Swoboda, Paul and Ho, Nhat and Albarqouni, Shadi and Xie, Pengtao and others},
journal={arXiv preprint arXiv:2306.11925},
year={2023}
}
``` | 8,872 | [
[
-0.040802001953125,
-0.025390625,
0.0005650520324707031,
-0.0035228729248046875,
-0.03521728515625,
-0.02740478515625,
-0.0035572052001953125,
-0.0245819091796875,
0.01276397705078125,
0.01363372802734375,
-0.037994384765625,
-0.033843994140625,
-0.0527038574218... |
Anwaarma/fine-tuned-cola-spam | 2023-06-05T15:26:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | Anwaarma | null | null | Anwaarma/fine-tuned-cola-spam | 0 | 2 | transformers | 2023-06-05T14:36:30 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-cola-spam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-cola-spam
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0516
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1474 | 0.2 | 50 | 0.0676 | 0.98 |
| 0.1008 | 0.41 | 100 | 0.0478 | 0.9933 |
| 0.098 | 0.61 | 150 | 0.0482 | 0.9867 |
| 0.0932 | 0.82 | 200 | 0.0516 | 0.9867 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,526 | [
[
-0.035552978515625,
-0.060089111328125,
-0.0011005401611328125,
0.0197906494140625,
-0.0187225341796875,
-0.01422119140625,
-0.005870819091796875,
-0.0200347900390625,
0.00687408447265625,
0.02484130859375,
-0.05499267578125,
-0.033599853515625,
-0.0587463378906... |
jackoyoungblood/distilbert-base-uncased-distilled-clinc | 2023-06-09T17:52:34.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jackoyoungblood | null | null | jackoyoungblood/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-06-05T16:19:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9306451612903226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0379
- Accuracy: 0.9306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.82 | 1.0 | 318 | 0.4227 | 0.6671 |
| 0.3222 | 2.0 | 636 | 0.1507 | 0.8426 |
| 0.1495 | 3.0 | 954 | 0.0788 | 0.9026 |
| 0.0962 | 4.0 | 1272 | 0.0575 | 0.9135 |
| 0.0754 | 5.0 | 1590 | 0.0488 | 0.9223 |
| 0.0651 | 6.0 | 1908 | 0.0441 | 0.9284 |
| 0.0592 | 7.0 | 2226 | 0.0412 | 0.9316 |
| 0.0555 | 8.0 | 2544 | 0.0394 | 0.93 |
| 0.0534 | 9.0 | 2862 | 0.0382 | 0.9310 |
| 0.0521 | 10.0 | 3180 | 0.0379 | 0.9306 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
| 2,200 | [
[
-0.034881591796875,
-0.038818359375,
0.0160675048828125,
0.0068206787109375,
-0.02288818359375,
-0.0171356201171875,
-0.00748443603515625,
-0.0035228729248046875,
0.01026153564453125,
0.0223236083984375,
-0.043792724609375,
-0.04998779296875,
-0.060943603515625,... |
cardiffnlp/flan-t5-base-tweet-nerd | 2023-06-09T04:53:30.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | cardiffnlp | null | null | cardiffnlp/flan-t5-base-tweet-nerd | 0 | 2 | transformers | 2023-06-05T17:34:17 |
---
widget:
- text: "context: dbh the worst game of all time its awful its terrible i hate it i never even wanted to play it i hope connor jumps off a cliff really, definition: 2018 adventure video game developed by Quantic Dream, target: dbh"
example_title: example 1
- text: "context: dbh the worst game of all time its awful its terrible i hate it i never even wanted to play it i hope connor jumps off a cliff really, definition: King of Axum, target: dbh"
example_title: example 2
- text: "context: 10’ | 0-0 | Good start, Patronato dangerous on the counter though #VamosBoca 🟦🟨🟦, definition: Argentinan Football Club, target: Patronato"
example_title: example 3
---
# cardiffnlp/flan-t5-base-tweet-nerd
This is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned on [cardiffnlp/super_tweeteval (tweet_nerd)](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="cardiffnlp/flan-t5-base-tweet-nerd")
output = pipe("context: dbh the worst game of all time its awful its terrible i hate it i never even wanted to play it i hope connor jumps off a cliff really, definition: 2018 adventure video game developed by Quantic Dream, target: dbh")
```
| 1,301 | [
[
-0.0202178955078125,
-0.0474853515625,
0.005245208740234375,
0.0171661376953125,
-0.0170440673828125,
0.0125885009765625,
0.003696441650390625,
0.003734588623046875,
0.051300048828125,
0.0262298583984375,
-0.07257080078125,
-0.05242919921875,
-0.0411376953125,
... |
osunlp/BioVocabBERT | 2023-07-04T14:26:56.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:2306.17649",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | osunlp | null | null | osunlp/BioVocabBERT | 2 | 2 | transformers | 2023-06-05T17:57:26 | This biomedical language model uses a specialized biomedical tokenizer which is more closely aligned with human-morphological judgements than previous biomedical tokenizers such as PubMedBERT.
Details about our tokenizer design, pre-training procedure and downstream results can be found in our [BioNLP @ ACL 2023 paper](http://arxiv.org/pdf/2306.17649.pdf)
---
license: apache-2.0
---
| 388 | [
[
-0.008270263671875,
-0.0517578125,
0.0340576171875,
0.0047149658203125,
-0.02850341796875,
0.01285552978515625,
-0.0028057098388671875,
-0.0660400390625,
0.0306396484375,
0.0249481201171875,
-0.031219482421875,
-0.050048828125,
-0.032501220703125,
0.02867126... |
vsugihjoso/vit-base-patch16-224-finetuned-flower | 2023-06-06T13:37:11.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | vsugihjoso | null | null | vsugihjoso/vit-base-patch16-224-finetuned-flower | 0 | 2 | transformers | 2023-06-05T18:38:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
| 1,119 | [
[
-0.0307769775390625,
-0.06048583984375,
0.007091522216796875,
0.0204315185546875,
-0.030059814453125,
-0.038909912109375,
-0.01384735107421875,
-0.0225677490234375,
0.00707244873046875,
0.0233001708984375,
-0.057830810546875,
-0.036407470703125,
-0.0434265136718... |
CeroShrijver/albert_chinese_xlarge-text-classification | 2023-06-06T02:22:23.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | CeroShrijver | null | null | CeroShrijver/albert_chinese_xlarge-text-classification | 0 | 2 | transformers | 2023-06-05T19:11:12 | ---
tags:
- generated_from_trainer
model-index:
- name: albert_chinese_xlarge-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_chinese_xlarge-text-classification
Test Accuracy: 0.8514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 929 | [
[
-0.029754638671875,
-0.03460693359375,
0.01531982421875,
0.021087646484375,
-0.0250091552734375,
-0.033905029296875,
-0.0080718994140625,
-0.0230865478515625,
0.0020694732666015625,
0.01385498046875,
-0.03173828125,
-0.0440673828125,
-0.0462646484375,
0.0091... |
charliemarx/distilbert-base-uncased-finetuned-cola | 2023-06-05T23:05:26.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | charliemarx | null | null | charliemarx/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-06-05T19:20:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5343776508344485
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8307
- Matthews Correlation: 0.5344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5252 | 1.0 | 535 | 0.5592 | 0.4011 |
| 0.3465 | 2.0 | 1070 | 0.5055 | 0.5035 |
| 0.2338 | 3.0 | 1605 | 0.5684 | 0.5324 |
| 0.1752 | 4.0 | 2140 | 0.8060 | 0.5309 |
| 0.1307 | 5.0 | 2675 | 0.8307 | 0.5344 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,036 | [
[
-0.021820068359375,
-0.048736572265625,
0.0099029541015625,
0.0186309814453125,
-0.02197265625,
-0.01020050048828125,
-0.0057830810546875,
-0.0030651092529296875,
0.022064208984375,
0.01038360595703125,
-0.0465087890625,
-0.036865234375,
-0.06268310546875,
-... |
federated/transformers-dsc-workshop | 2023-06-05T21:31:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:tiny_shakespeare",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | federated | null | null | federated/transformers-dsc-workshop | 0 | 2 | transformers | 2023-06-05T19:38:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tiny_shakespeare
model-index:
- name: transformers-dsc-workshop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transformers-dsc-workshop
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the tiny_shakespeare dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,072 | [
[
-0.045623779296875,
-0.03204345703125,
0.0273284912109375,
-0.0052032470703125,
-0.033721923828125,
-0.0236053466796875,
-0.00547027587890625,
-0.013092041015625,
0.0007061958312988281,
0.016265869140625,
-0.059173583984375,
-0.0281524658203125,
-0.06787109375,
... |
fayyadd/distilbert-base-uncased-finetuned-cola | 2023-06-06T00:22:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | fayyadd | null | null | fayyadd/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-06-05T21:48:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5517964161621091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7387
- Matthews Correlation: 0.5518
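Matthews correlation, the metric reported here, is computed from the binary confusion matrix. A self-contained sketch of the formula (the score above comes from the standard evaluation tooling, not from this code, and the counts below are illustrative):

```python
import math

def matthews_corr(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative counts (not CoLA's actual confusion matrix):
score = matthews_corr(tp=40, tn=40, fp=10, fn=10)
```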
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5216 | 1.0 | 535 | 0.5281 | 0.4364 |
| 0.3471 | 2.0 | 1070 | 0.5044 | 0.4969 |
| 0.2372 | 3.0 | 1605 | 0.5483 | 0.5481 |
| 0.1777 | 4.0 | 2140 | 0.7387 | 0.5518 |
| 0.1253 | 5.0 | 2675 | 0.8130 | 0.5367 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,042 | [embedding vector truncated] |
cardiffnlp/flan-t5-base-tweet-intimacy | 2023-06-05T21:52:58.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | cardiffnlp | null | null | cardiffnlp/flan-t5-base-tweet-intimacy | 0 | 2 | transformers | 2023-06-05T21:51:21 |
---
widget:
- text: "score intimacy: @user Wait that’s not a thing already under the Trump admin?"
example_title: example 1
- text: "score intimacy: @user 5 fucking minutes"
example_title: example 2
- text: "score intimacy: @user Furthermore, harassment is ILLEGAL in any form!"
example_title: example 3
---
# cardiffnlp/flan-t5-base-tweet-intimacy
This is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned on [cardiffnlp/super_tweeteval (tweet_intimacy)](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="cardiffnlp/flan-t5-base-tweet-intimacy")
output = pipe("score intimacy: @user Wait that’s not a thing already under the Trump admin?")
```
| 804 | [embedding vector truncated] |
bogdancazan/pegasus_summarization_pretrained | 2023-06-05T23:44:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | bogdancazan | null | null | bogdancazan/pegasus_summarization_pretrained | 0 | 2 | transformers | 2023-06-05T23:03:44 | ---
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: pegasus_summarization_pretrained
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.3979
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus_summarization_pretrained
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9463
- Rouge1: 0.3979
- Rouge2: 0.1963
- Rougel: 0.2889
- Rougelsum: 0.2887
- Gen Len: 61.9919
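ROUGE-1, reported above, is unigram-overlap F1 between the generated and reference summaries. A simplified sketch of the idea (no stemming or tokenizer niceties; the card's numbers come from the standard ROUGE tooling, not this code):

```python
from collections import Counter

def rouge1_f1(reference: str, hypothesis: str) -> float:
    """Unigram-overlap F1, a bare-bones stand-in for ROUGE-1."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    overlap = sum((ref & hyp).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat sat")
```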
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 124 | 2.0226 | 0.3896 | 0.1882 | 0.2838 | 0.2839 | 61.5444 |
| No log | 2.0 | 248 | 1.9736 | 0.3991 | 0.1963 | 0.291 | 0.2907 | 61.9194 |
| No log | 3.0 | 372 | 1.9542 | 0.3977 | 0.196 | 0.2889 | 0.2885 | 61.9718 |
| No log | 4.0 | 496 | 1.9463 | 0.3979 | 0.1963 | 0.2889 | 0.2887 | 61.9919 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,120 | [embedding vector truncated] |
DunnBC22/bert-base-uncased-News_About_Gold | 2023-06-10T22:33:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | DunnBC22 | null | null | DunnBC22/bert-base-uncased-News_About_Gold | 1 | 2 | transformers | 2023-06-05T23:22:39 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: bert-base-uncased-News_About_Gold
results: []
language:
- en
pipeline_tag: text-classification
---
# bert-base-uncased-News_About_Gold
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased).
It achieves the following results on the evaluation set:
- Loss: 0.2995
- Accuracy: 0.9139
- Weighted f1: 0.9133
- Micro f1: 0.9139
- Macro f1: 0.8758
- Weighted recall: 0.9139
- Micro recall: 0.9139
- Macro recall: 0.8647
- Weighted precision: 0.9131
- Micro precision: 0.9139
- Macro precision: 0.8885
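The card reports both weighted and macro averages: macro F1 treats every class equally, while weighted F1 weights each class's F1 by its support, which is why the two diverge on imbalanced data. A small pure-Python illustration with toy labels (not the card's actual evaluation code):

```python
def per_class_f1(y_true, y_pred, cls):
    """F1 for a single class, from true/false positives and false negatives."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

y_true = [0, 0, 0, 1]          # class 0 has 3x the support of class 1
y_pred = [0, 0, 1, 1]
f1s = {c: per_class_f1(y_true, y_pred, c) for c in (0, 1)}
support = {c: y_true.count(c) for c in (0, 1)}
macro = sum(f1s.values()) / len(f1s)
weighted = sum(f1s[c] * support[c] for c in f1s) / len(y_true)
```

Because class 0 both has more support and is scored better here, the weighted average lands above the macro average.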
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/News%20About%20Gold%20-%20Sentiment%20Analysis%20-%20Bert-Base%20with%20W%26B.ipynb
This project is part of a comparison of seven (7) transformers. Here is the README page for the comparison: https://github.com/DunnBC22/NLP_Projects/tree/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-in-commodity-market-gold
_Input Word Length:_
/Images/Input%20Word%20Length.png)
_Class Distribution:_
/Images/Class%20Distribution.png)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.6809 | 1.0 | 133 | 0.3347 | 0.8974 | 0.8928 | 0.8974 | 0.8043 | 0.8974 | 0.8974 | 0.7710 | 0.8966 | 0.8974 | 0.8800 |
| 0.2778 | 2.0 | 266 | 0.2844 | 0.9144 | 0.9145 | 0.9144 | 0.8668 | 0.9144 | 0.9144 | 0.8689 | 0.9150 | 0.9144 | 0.8650 |
| 0.2022 | 3.0 | 399 | 0.2823 | 0.9172 | 0.9166 | 0.9172 | 0.8820 | 0.9172 | 0.9172 | 0.8691 | 0.9165 | 0.9172 | 0.8968 |
| 0.1496 | 4.0 | 532 | 0.2856 | 0.9167 | 0.9164 | 0.9167 | 0.8838 | 0.9167 | 0.9167 | 0.8686 | 0.9167 | 0.9167 | 0.9027 |
| 0.1196 | 5.0 | 665 | 0.2995 | 0.9139 | 0.9133 | 0.9139 | 0.8758 | 0.9139 | 0.9139 | 0.8647 | 0.9131 | 0.9139 | 0.8885 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 | 3,866 | [embedding vector truncated] |
cafbr/minigpt | 2023-06-06T21:13:47.000Z | [
"keras",
"region:us"
] | null | cafbr | null | null | cafbr/minigpt | 0 | 2 | keras | 2023-06-05T23:47:35 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
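The Adam settings in the table can be sketched as a single parameter update; with bias correction, the very first step is approximately `learning_rate * sign(gradient)`. A hedged pure-Python sketch (Keras's real optimizer also manages slot variables, dtypes, and the EMA options listed above, all omitted here):

```python
import math

def adam_step(grad, m=0.0, v=0.0, t=1, lr=0.001, beta_1=0.9, beta_2=0.999, eps=1e-07):
    """One Adam update; returns (parameter_delta, new_m, new_v)."""
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad * grad
    m_hat = m / (1 - beta_1 ** t)   # bias correction for the first moment
    v_hat = v / (1 - beta_2 ** t)   # bias correction for the second moment
    return lr * m_hat / (math.sqrt(v_hat) + eps), m, v

delta, m, v = adam_step(grad=0.5)   # first step is close to lr = 0.001
```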
| 738 | [embedding vector truncated] |
cardiffnlp/flan-t5-base-tweet-emoji | 2023-06-08T18:54:01.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | cardiffnlp | null | null | cardiffnlp/flan-t5-base-tweet-emoji | 0 | 2 | transformers | 2023-06-06T02:57:07 |
---
widget:
- text: "Louis Loudlinson is back with 16 and 18 secs "
example_title: example 1
- text: "I’m tired of being sick.. it’s been four days dawg "
example_title: example 2
- text: "Here is your daily reminder that ADAM COLE IS ONE OF THE BEST WRESTLERS ON THE PLANET. Saw some unintelligent ghouls on here laugh at how he was NXT Champion, and now getting “kissed by the Bucks” Because he’d be better off on Monday or Friday, right? Patience. Seek it "
example_title: example 3
---
# cardiffnlp/flan-t5-base-tweet-emoji
This is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned on [cardiffnlp/super_tweeteval (tweet_emoji)](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="cardiffnlp/flan-t5-base-tweet-emoji")
output = pipe("Louis Loudlinson is back with 16 and 18 secs ")
```
| 946 | [embedding vector truncated] |
DunnBC22/bort-News_About_Gold | 2023-06-10T22:31:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"endpoints_compatible",
"region:us"
] | text-classification | DunnBC22 | null | null | DunnBC22/bort-News_About_Gold | 1 | 2 | transformers | 2023-06-06T03:05:02 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: bort-News_About_Gold
results: []
language:
- en
pipeline_tag: text-classification
---
# bort-News_About_Gold
This model is a fine-tuned version of [amazon/bort](https://huggingface.co/amazon/bort).
It achieves the following results on the evaluation set:
- Loss: 0.3791
- Accuracy: 0.8770
- Weighted f1: 0.8743
- Micro f1: 0.8770
- Macro f1: 0.7791
- Weighted recall: 0.8770
- Micro recall: 0.8770
- Macro recall: 0.7539
- Weighted precision: 0.8778
- Micro precision: 0.8770
- Macro precision: 0.8463
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/News%20About%20Gold%20-%20Sentiment%20Analysis%20-%20BORT%20with%20W%26B.ipynb
This project is part of a comparison of seven (7) transformers. Here is the README page for the comparison: https://github.com/DunnBC22/NLP_Projects/tree/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-in-commodity-market-gold
_Input Word Length:_
/Images/Input%20Word%20Length.png)
_Class Distribution:_
/Images/Class%20Distribution.png)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 1.0437 | 1.0 | 133 | 0.8379 | 0.6954 | 0.6800 | 0.6954 | 0.5285 | 0.6954 | 0.6954 | 0.5326 | 0.6944 | 0.6954 | 0.5434 |
| 0.6297 | 2.0 | 266 | 0.4715 | 0.8340 | 0.8209 | 0.8340 | 0.6267 | 0.8340 | 0.8340 | 0.6368 | 0.8111 | 0.8340 | 0.6187 |
| 0.4216 | 3.0 | 399 | 0.3984 | 0.8661 | 0.8616 | 0.8661 | 0.7464 | 0.8661 | 0.8661 | 0.7231 | 0.8698 | 0.8661 | 0.8597 |
| 0.3339 | 4.0 | 532 | 0.3808 | 0.8765 | 0.8748 | 0.8765 | 0.7825 | 0.8765 | 0.8765 | 0.7628 | 0.8774 | 0.8765 | 0.8304 |
| 0.2869 | 5.0 | 665 | 0.3791 | 0.8770 | 0.8743 | 0.8770 | 0.7791 | 0.8770 | 0.8770 | 0.7539 | 0.8778 | 0.8770 | 0.8463 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 | 3,804 | [embedding vector truncated] |
mujadid-syahbana/model-baru-collab-4 | 2023-06-06T05:20:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | mujadid-syahbana | null | null | mujadid-syahbana/model-baru-collab-4 | 0 | 2 | transformers | 2023-06-06T03:48:52 | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: model-baru-collab-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-baru-collab-4
This model is a fine-tuned version of [zarko1231/model-baru-collab-3](https://huggingface.co/zarko1231/model-baru-collab-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0326
- Wer: 0.3423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 50
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
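The effective batch size of 100 above comes from accumulating gradients over 2 micro-batches of 50 before each optimizer step. For a mean-reduced loss and equal-sized micro-batches, averaging the micro-batch gradients reproduces the full-batch gradient exactly, as this toy sketch shows (squared error on a scalar weight; an illustration, not the Trainer's implementation):

```python
def mean_grad(w, batch):
    """d/dw of mean((w*x - y)^2) over the batch."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0)]
w = 0.5

full = mean_grad(w, data)                        # one batch of 4
micro = [mean_grad(w, data[:2]), mean_grad(w, data[2:])]
accumulated = sum(micro) / len(micro)            # 2 accumulation steps of 2
```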
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.9939 | 1.0 | 26 | 3.3198 | 1.0039 |
| 3.019 | 2.0 | 52 | 2.7545 | 1.0039 |
| 2.4572 | 3.0 | 78 | 1.7117 | 1.0587 |
| 1.3597 | 4.0 | 104 | 0.5751 | 1.0185 |
| 0.6395 | 5.0 | 130 | 0.2423 | 0.7125 |
| 0.3831 | 6.0 | 156 | 0.1320 | 0.5572 |
| 0.3083 | 7.0 | 182 | 0.0855 | 0.4575 |
| 0.2039 | 8.0 | 208 | 0.0603 | 0.3941 |
| 0.1589 | 9.0 | 234 | 0.0487 | 0.3787 |
| 0.2031 | 10.0 | 260 | 0.0442 | 0.3794 |
| 0.1396 | 11.0 | 286 | 0.0406 | 0.3655 |
| 0.1612 | 12.0 | 312 | 0.0390 | 0.3532 |
| 0.1285 | 13.0 | 338 | 0.0373 | 0.3509 |
| 0.1355 | 14.0 | 364 | 0.0356 | 0.3532 |
| 0.1527 | 15.0 | 390 | 0.0352 | 0.3478 |
| 0.1207 | 16.0 | 416 | 0.0339 | 0.3470 |
| 0.1106 | 17.0 | 442 | 0.0332 | 0.3462 |
| 0.1243 | 18.0 | 468 | 0.0332 | 0.3447 |
| 0.1107 | 19.0 | 494 | 0.0328 | 0.3447 |
| 0.0986 | 20.0 | 520 | 0.0326 | 0.3423 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,561 | [embedding vector truncated] |
gokuls/hBERTv2_new_pretrain_w_init__cola | 2023-06-06T06:32:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/hBERTv2_new_pretrain_w_init__cola | 0 | 2 | transformers | 2023-06-06T06:22:08 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init__cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init__cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6176
- Matthews Correlation: 0.0
- Accuracy: 0.6913
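A Matthews correlation of exactly 0.0 alongside 69.1% accuracy is the signature of a degenerate classifier: the CoLA validation split is roughly 69% "acceptable", so always predicting the majority class matches that accuracy while MCC (conventionally 0 when its denominator vanishes) shows no correlation. A small sketch of why, with illustrative counts rather than the actual validation data:

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation for binary labels; 0.0 when the denominator is 0."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

y_true = [1] * 69 + [0] * 31   # roughly CoLA's class balance
y_pred = [1] * 100             # constant majority-class predictor
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / 100
```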
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6224 | 1.0 | 67 | 0.6209 | 0.0 | 0.6913 |
| 0.6167 | 2.0 | 134 | 0.6176 | 0.0 | 0.6913 |
| 0.6138 | 3.0 | 201 | 0.6234 | 0.0 | 0.6913 |
| 0.6162 | 4.0 | 268 | 0.6188 | 0.0 | 0.6913 |
| 0.6108 | 5.0 | 335 | 0.6202 | 0.0 | 0.6913 |
| 0.6114 | 6.0 | 402 | 0.6180 | 0.0 | 0.6913 |
| 0.6117 | 7.0 | 469 | 0.6192 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,480 | [embedding vector truncated] |
gokuls/hBERTv2_new_pretrain_48_cola | 2023-06-06T06:40:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/hBERTv2_new_pretrain_48_cola | 0 | 2 | transformers | 2023-06-06T06:28:27 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: hBERTv2_new_pretrain_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6174
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6319 | 1.0 | 67 | 0.6185 | 0.0 |
| 0.6208 | 2.0 | 134 | 0.6358 | 0.0 |
| 0.6124 | 3.0 | 201 | 0.6174 | 0.0 |
| 0.6048 | 4.0 | 268 | 0.6443 | 0.0 |
| 0.5847 | 5.0 | 335 | 0.6521 | -0.0207 |
| 0.5739 | 6.0 | 402 | 0.6336 | 0.0667 |
| 0.5562 | 7.0 | 469 | 0.6433 | -0.0110 |
| 0.54 | 8.0 | 536 | 0.6524 | 0.0910 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,331 | [embedding vector truncated] |
gokuls/hBERTv1_new_pretrain_48_cola | 2023-06-06T06:41:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/hBERTv1_new_pretrain_48_cola | 0 | 2 | transformers | 2023-06-06T06:28:33 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6153
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6296 | 1.0 | 67 | 0.6343 | 0.0 | 0.6913 |
| 0.6166 | 2.0 | 134 | 0.6423 | 0.0 | 0.6913 |
| 0.611 | 3.0 | 201 | 0.6202 | 0.0 | 0.6913 |
| 0.6085 | 4.0 | 268 | 0.6188 | 0.0 | 0.6913 |
| 0.6107 | 5.0 | 335 | 0.6153 | 0.0 | 0.6913 |
| 0.5988 | 6.0 | 402 | 0.6260 | 0.0 | 0.6913 |
| 0.5826 | 7.0 | 469 | 0.6354 | 0.0 | 0.6913 |
| 0.5536 | 8.0 | 536 | 0.6534 | 0.0304 | 0.6539 |
| 0.5402 | 9.0 | 603 | 0.6798 | 0.0917 | 0.5954 |
| 0.5341 | 10.0 | 670 | 0.7670 | 0.0513 | 0.6836 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,715 | [embedding vector truncated] |
gokuls/hBERTv1_new_pretrain_w_init_48_cola | 2023-06-06T06:49:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/hBERTv1_new_pretrain_w_init_48_cola | 0 | 2 | transformers | 2023-06-06T06:36:57 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6185
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6224 | 1.0 | 67 | 0.6200 | 0.0 | 0.6913 |
| 0.6183 | 2.0 | 134 | 0.6233 | 0.0 | 0.6913 |
| 0.6148 | 3.0 | 201 | 0.6241 | 0.0 | 0.6913 |
| 0.6146 | 4.0 | 268 | 0.6185 | 0.0 | 0.6913 |
| 0.6097 | 5.0 | 335 | 0.6187 | 0.0 | 0.6913 |
| 0.6094 | 6.0 | 402 | 0.6209 | 0.0 | 0.6913 |
| 0.6102 | 7.0 | 469 | 0.6328 | 0.0 | 0.6913 |
| 0.5814 | 8.0 | 536 | 0.6735 | 0.0 | 0.6913 |
| 0.5799 | 9.0 | 603 | 0.6648 | -0.0022 | 0.6788 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,660 | [embedding vector truncated] |
gokuls/hBERTv2_new_pretrain_w_init_48_cola | 2023-06-06T06:51:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/hBERTv2_new_pretrain_w_init_48_cola | 0 | 2 | transformers | 2023-06-06T06:39:56 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.08208497144404353
- name: Accuracy
type: accuracy
value: 0.6836050152778625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6191
- Matthews Correlation: 0.0821
- Accuracy: 0.6836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6301 | 1.0 | 67 | 0.6293 | 0.0 | 0.6913 |
| 0.6238 | 2.0 | 134 | 0.6254 | 0.0 | 0.6913 |
| 0.6072 | 3.0 | 201 | 0.6271 | 0.0339 | 0.6759 |
| 0.5821 | 4.0 | 268 | 0.6191 | 0.0821 | 0.6836 |
| 0.5262 | 5.0 | 335 | 0.7057 | 0.1151 | 0.6510 |
| 0.4735 | 6.0 | 402 | 0.6756 | 0.1181 | 0.6577 |
| 0.4127 | 7.0 | 469 | 0.8493 | 0.1229 | 0.6711 |
| 0.349 | 8.0 | 536 | 0.8919 | 0.1434 | 0.6232 |
| 0.311 | 9.0 | 603 | 0.9018 | 0.1398 | 0.6769 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,679 | [
[
-0.0286712646484375,
-0.041748046875,
0.00521087646484375,
0.013763427734375,
-0.015533447265625,
-0.0099029541015625,
0.00020205974578857422,
-0.016357421875,
0.0264129638671875,
0.0177154541015625,
-0.051116943359375,
-0.03179931640625,
-0.05133056640625,
... |
ctojang/distilbert-base-uncased-finetuned-clinc | 2023-06-06T10:23:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ctojang | null | null | ctojang/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-06-06T08:53:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9193548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7746
- Accuracy: 0.9194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2779 | 0.7394 |
| 3.7796 | 2.0 | 636 | 1.8682 | 0.8326 |
| 3.7796 | 3.0 | 954 | 1.1545 | 0.8952 |
| 1.6829 | 4.0 | 1272 | 0.8570 | 0.9097 |
| 0.899 | 5.0 | 1590 | 0.7746 | 0.9194 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.2
| 1,932 | [
[
-0.034454345703125,
-0.041229248046875,
0.012786865234375,
0.006084442138671875,
-0.0265045166015625,
-0.0254364013671875,
-0.01229095458984375,
-0.0102386474609375,
0.0026378631591796875,
0.022491455078125,
-0.046234130859375,
-0.047607421875,
-0.05844116210937... |
tillschwoerer/bert-base-uncased-finetuned-toxic-comments-detection | 2023-06-12T18:55:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | tillschwoerer | null | null | tillschwoerer/bert-base-uncased-finetuned-toxic-comments-detection | 0 | 2 | transformers | 2023-06-06T09:42:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-finetuned-toxic-comments-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-toxic-comments-detection
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- Accuracy: 0.945
- Precision: 0.7273
- Recall: 0.7619
- F1: 0.7442
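F1 is the harmonic mean of the precision and recall above; a quick sanity check in plain Python reproduces the reported value to rounding:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.7273, 0.7619), 4))  # 0.7442
```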
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
| 1,267 | [
[
-0.031494140625,
-0.040130615234375,
0.0263214111328125,
0.0101318359375,
-0.03521728515625,
-0.0316162109375,
-0.0187225341796875,
-0.021575927734375,
0.00681304931640625,
0.0304412841796875,
-0.044586181640625,
-0.04901123046875,
-0.052001953125,
-0.013320... |
ctojang/distilbert-base-uncased-distilled-clinc | 2023-06-06T10:31:23.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ctojang | null | null | ctojang/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-06-06T10:23:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
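The card does not record the distillation objective used for this checkpoint; a common recipe mixes the hard-label cross-entropy with a temperature-scaled KL term against a fine-tuned teacher's logits. A hypothetical sketch of that combined loss (the temperature `T`, weight `alpha`, and logits are illustrative, not values taken from this training run):

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx, T=2.0, alpha=0.5):
    """alpha * hard-label CE + (1 - alpha) * T^2 * KL(teacher || student)."""
    ce = -math.log(softmax(student_logits)[true_idx])
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(p * math.log(p / q) for p, q in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * T * T * kl

loss = distillation_loss([2.0, 0.5, -1.0], [1.8, 0.6, -0.9], true_idx=0)
```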
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.2
| 1,113 | [
[
-0.03253173828125,
-0.046875,
0.017669677734375,
0.00876617431640625,
-0.037933349609375,
-0.0178375244140625,
-0.0131988525390625,
-0.00887298583984375,
0.007160186767578125,
0.0274200439453125,
-0.03851318359375,
-0.042236328125,
-0.061370849609375,
-0.000... |
Anwaarma/my-awesome-spammodel2 | 2023-06-06T10:50:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | Anwaarma | null | null | Anwaarma/my-awesome-spammodel2 | 0 | 2 | transformers | 2023-06-06T10:43:02 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my-awesome-spammodel2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-spammodel2
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1559
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1808 | 0.2 | 50 | 0.1473 | 0.96 |
| 0.0845 | 0.41 | 100 | 0.1349 | 0.9733 |
| 0.087 | 0.61 | 150 | 0.1499 | 0.9733 |
| 0.0824 | 0.82 | 200 | 0.1372 | 0.9733 |
| 0.0882 | 1.02 | 250 | 0.1223 | 0.9733 |
| 0.0664 | 1.23 | 300 | 0.1434 | 0.9733 |
| 0.0152 | 1.43 | 350 | 0.1569 | 0.9733 |
| 0.0668 | 1.64 | 400 | 0.1531 | 0.9733 |
| 0.0251 | 1.84 | 450 | 0.1559 | 0.9667 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,838 | [
[
-0.04534912109375,
-0.04498291015625,
0.00665283203125,
0.007617950439453125,
-0.0159759521484375,
-0.029266357421875,
-0.0001825094223022461,
-0.0191650390625,
0.007465362548828125,
0.0279388427734375,
-0.053619384765625,
-0.0438232421875,
-0.05316162109375,
... |
PhysHunter/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-06T10:52:38.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | PhysHunter | null | null | PhysHunter/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-06T10:52:05 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 387.00 +/- 119.54
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PhysHunter -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PhysHunter -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PhysHunter
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 30000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0005),
('learning_starts', 30000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
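With `exploration_fraction` 0.1 and `exploration_final_eps` 0.01 over 1,000,000 timesteps, epsilon anneals linearly during the first 100,000 steps and then holds. A sketch of that schedule (the start value of 1.0 is the SB3 default, not listed above):

```python
def exploration_eps(step, n_timesteps=1_000_000, fraction=0.1,
                    final_eps=0.01, initial_eps=1.0):
    """Linear epsilon schedule as in SB3 DQN: anneal over fraction * n_timesteps, then hold."""
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(exploration_eps(0))        # 1.0 at the start
print(exploration_eps(50_000))   # ~0.505, halfway through annealing
print(exploration_eps(200_000))  # ~0.01, held for the rest of training
```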
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,763 | [
[
-0.035552978515625,
-0.035736083984375,
0.0223541259765625,
0.0191192626953125,
-0.013916015625,
-0.01406097412109375,
0.01611328125,
-0.01509857177734375,
0.01503753662109375,
0.0207061767578125,
-0.068603515625,
-0.036285400390625,
-0.025115966796875,
-0.0... |
casals90/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-06T14:09:52.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | casals90 | null | null | casals90/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-06T14:09:22 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga casals90 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga casals90 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga casals90
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 1500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,757 | [
[
-0.042877197265625,
-0.03924560546875,
0.0194091796875,
0.0251617431640625,
-0.01067352294921875,
-0.017181396484375,
0.01068878173828125,
-0.01277923583984375,
0.01226043701171875,
0.0224761962890625,
-0.072265625,
-0.0357666015625,
-0.024993896484375,
-0.0... |
kforkai/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-06T15:24:31.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kforkai | null | null | kforkai/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-06T14:43:05 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 352.00 +/- 98.65
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kforkai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kforkai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kforkai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
'stable_baselines3.common.atari_wrappers.AtariWrapper'),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,750 | [
[
-0.0438232421875,
-0.03948974609375,
0.0200042724609375,
0.025115966796875,
-0.01168060302734375,
-0.0176239013671875,
0.0084228515625,
-0.0137481689453125,
0.0133056640625,
0.0227813720703125,
-0.0728759765625,
-0.03509521484375,
-0.02490234375,
-0.00342750... |
NorahAlshahrani/biLSTMmsda | 2023-06-08T02:12:59.000Z | [
"keras",
"region:us"
] | null | NorahAlshahrani | null | null | NorahAlshahrani/biLSTMmsda | 0 | 2 | keras | 2023-06-06T14:57:07 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 9.999999747378752e-05 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
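The table records a standard Keras Adam configuration (learning rate ≈ 1e-4); the update rule it implements, sketched in plain Python for a single scalar parameter, with defaults matching the table (no amsgrad):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-4, beta_1=0.9, beta_2=0.999, eps=1e-7):
    """One Adam update step; returns the new parameter and moment estimates."""
    m = beta_1 * m + (1 - beta_1) * grad            # first-moment estimate
    v = beta_2 * v + (1 - beta_2) * grad * grad     # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)                   # bias correction
    v_hat = v / (1 - beta_2 ** t)
    return param - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

p, m, v = 0.5, 0.0, 0.0
p, m, v = adam_step(p, grad=0.2, m=m, v=v, t=1)  # first step moves param by ~lr
```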
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 658 | [
[
-0.035003662109375,
-0.03900146484375,
0.026519775390625,
0.007694244384765625,
-0.041839599609375,
-0.01904296875,
0.01061248779296875,
-0.00994110107421875,
0.0153350830078125,
0.030426025390625,
-0.034210205078125,
-0.056915283203125,
-0.043212890625,
-0.... |
Andysoeasy/digit_defines | 2023-06-07T14:18:34.000Z | [
"keras",
"has_space",
"region:us"
] | null | Andysoeasy | null | null | Andysoeasy/digit_defines | 0 | 2 | keras | 2023-06-06T15:52:45 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset

| 115 | [
[
-0.00514984130859375,
-0.05731201171875,
0.01556396484375,
0.0010480880737304688,
-0.0540771484375,
0.038726806640625,
0.029449462890625,
0.0203399658203125,
0.06146240234375,
0.029937744140625,
-0.0300750732421875,
-0.0426025390625,
-0.039031982421875,
-0.0... |
Bananaws/digits | 2023-06-06T17:24:18.000Z | [
"keras",
"region:us"
] | null | Bananaws | null | null | Bananaws/digits | 0 | 2 | keras | 2023-06-06T15:52:46 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
Copied from user vladimirchabanov
 | 157 | [
[
-0.01165771484375,
-0.053924560546875,
0.013916015625,
-0.005130767822265625,
-0.057891845703125,
0.04986572265625,
0.0295562744140625,
0.0117034912109375,
0.0716552734375,
0.0256195068359375,
-0.0330810546875,
-0.0457763671875,
-0.046173095703125,
-0.010459... |
Jabka/digits | 2023-06-06T16:17:45.000Z | [
"keras",
"ru",
"region:us"
] | null | Jabka | null | null | Jabka/digits | 0 | 2 | keras | 2023-06-06T15:53:06 | ---
language:
- ru
library_name: keras
---
# My digit recognition model
Trained on the MNIST dataset
[
-0.00791168212890625,
-0.037139892578125,
0.027099609375,
0.01373291015625,
-0.04486083984375,
0.033111572265625,
0.033233642578125,
0.0172576904296875,
0.06390380859375,
0.01470947265625,
-0.052093505859375,
-0.045013427734375,
-0.0487060546875,
-0.00312614... |
sundragon/digits | 2023-06-11T21:47:29.000Z | [
"keras",
"region:us"
] | null | sundragon | null | null | sundragon/digits | 0 | 2 | keras | 2023-06-06T15:53:09 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset
 | 114 | [
[
-0.0094757080078125,
-0.05584716796875,
0.01277923583984375,
0.004627227783203125,
-0.06109619140625,
0.0374755859375,
0.0279083251953125,
0.01415252685546875,
0.06463623046875,
0.0219573974609375,
-0.0267333984375,
-0.05322265625,
-0.049285888671875,
-0.007... |
Elvis120/digits | 2023-06-06T17:29:56.000Z | [
"keras",
"region:us"
] | null | Elvis120 | null | null | Elvis120/digits | 0 | 2 | keras | 2023-06-06T15:53:16 | ---
library_name: keras
---
# My digit recognition model
Trained on the MNIST dataset
 | 115 | [
[
-0.013641357421875,
-0.05694580078125,
0.01335906982421875,
0.006359100341796875,
-0.05706787109375,
0.038116455078125,
0.0234832763671875,
0.0168304443359375,
0.0670166015625,
0.0275421142578125,
-0.0280609130859375,
-0.047576904296875,
-0.047088623046875,
... |
Hiricus/digits | 2023-06-09T08:38:22.000Z | [
"keras",
"region:us"
] | null | Hiricus | null | null | Hiricus/digits | 0 | 2 | keras | 2023-06-06T15:53:27 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset
 | 122 | [
[
-0.0043792724609375,
-0.058074951171875,
0.0163726806640625,
0.005916595458984375,
-0.05560302734375,
0.03607177734375,
0.026519775390625,
0.01512908935546875,
0.0721435546875,
0.0240325927734375,
-0.03253173828125,
-0.04803466796875,
-0.051727294921875,
-0.... |
Timiry/digits | 2023-06-07T10:50:22.000Z | [
"keras",
"region:us"
] | null | Timiry | null | null | Timiry/digits | 0 | 2 | keras | 2023-06-06T15:53:31 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 105 | [
[
-0.020233154296875,
-0.0301971435546875,
0.01544952392578125,
0.01727294921875,
-0.051971435546875,
-0.0163421630859375,
0.019561767578125,
0.033355712890625,
0.034454345703125,
0.05322265625,
-0.019500732421875,
-0.0521240234375,
-0.042724609375,
-0.0144042... |
Nika7664/digits | 2023-06-09T18:19:52.000Z | [
"keras",
"region:us"
] | null | Nika7664 | null | null | Nika7664/digits | 0 | 2 | keras | 2023-06-06T15:56:38 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 112 | [
[
-0.01277923583984375,
-0.05828857421875,
0.0110931396484375,
0.004222869873046875,
-0.058197021484375,
0.03973388671875,
0.022369384765625,
0.017547607421875,
0.06829833984375,
0.024871826171875,
-0.0284423828125,
-0.04669189453125,
-0.044342041015625,
-0.00... |
AnastasiaAv/digits | 2023-06-07T17:54:28.000Z | [
"keras",
"region:us"
] | null | AnastasiaAv | null | null | AnastasiaAv/digits | 0 | 2 | keras | 2023-06-06T16:01:02 | ---
library_name: keras
---
# My digit recognition model
Trained on the MNIST dataset
 | 114 | [
[
-0.00490570068359375,
-0.05426025390625,
0.0167236328125,
0.007537841796875,
-0.053955078125,
0.042449951171875,
0.02655029296875,
0.016571044921875,
0.06695556640625,
0.028076171875,
-0.02655029296875,
-0.04815673828125,
-0.044342041015625,
-0.0063896179199... |
Rage4/PythLab3 | 2023-06-09T17:38:47.000Z | [
"keras",
"region:us"
] | null | Rage4 | null | null | Rage4/PythLab3 | 0 | 2 | keras | 2023-06-06T16:04:59 | ---
library_name: keras
---
# Digit recognition model, trained on the MNIST dataset
 | 122 | [
[
-0.00437164306640625,
-0.058319091796875,
0.0165863037109375,
0.00592803955078125,
-0.0555419921875,
0.03607177734375,
0.0265960693359375,
0.01479339599609375,
0.0718994140625,
0.0240020751953125,
-0.032562255859375,
-0.048065185546875,
-0.05169677734375,
-0... |
Igor2004/digits | 2023-06-06T17:31:32.000Z | [
"keras",
"region:us"
] | null | Igor2004 | null | null | Igor2004/digits | 0 | 2 | keras | 2023-06-06T16:06:30 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 109 | [
[
-0.0126495361328125,
-0.058258056640625,
0.009979248046875,
0.00440216064453125,
-0.059356689453125,
0.039764404296875,
0.021820068359375,
0.0190277099609375,
0.0675048828125,
0.0233612060546875,
-0.0289459228515625,
-0.0477294921875,
-0.046051025390625,
-0.... |
ArturStepanenko/digits | 2023-06-06T17:32:36.000Z | [
"keras",
"region:us"
] | null | ArturStepanenko | null | null | ArturStepanenko/digits | 0 | 2 | keras | 2023-06-06T16:37:25 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 158 | [
[
-0.00798797607421875,
-0.048675537109375,
0.01401519775390625,
0.0204925537109375,
-0.065185546875,
0.0233612060546875,
0.045196533203125,
-0.00797271728515625,
0.0413818359375,
0.0255889892578125,
-0.03350830078125,
-0.035888671875,
-0.046051025390625,
-0.0... |
SaiderNN/digits | 2023-06-09T09:27:39.000Z | [
"keras",
"region:us"
] | null | SaiderNN | null | null | SaiderNN/digits | 0 | 2 | keras | 2023-06-06T16:46:32 | ---
library_name: keras
---
# Digit recognition model
Trained on the MNIST dataset
 | 112 | [
[
-0.00514984130859375,
-0.05731201171875,
0.01556396484375,
0.0010480880737304688,
-0.0540771484375,
0.038726806640625,
0.029449462890625,
0.0203399658203125,
0.06146240234375,
0.029937744140625,
-0.0300750732421875,
-0.0426025390625,
-0.039031982421875,
-0.0... |
NorahAlshahrani/2dCNNmsda | 2023-06-08T02:12:22.000Z | [
"keras",
"region:us"
] | null | NorahAlshahrani | null | null | NorahAlshahrani/2dCNNmsda | 0 | 2 | keras | 2023-06-06T16:47:39 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 658 | [
[
-0.034637451171875,
-0.0401611328125,
0.0255584716796875,
0.00649261474609375,
-0.041046142578125,
-0.0197601318359375,
0.01187896728515625,
-0.0110015869140625,
0.0156707763671875,
0.033538818359375,
-0.035552978515625,
-0.053741455078125,
-0.0428466796875,
... |
sofia-todeschini/BioELECTRA-base-LitCovid-v1.0 | 2023-06-06T18:45:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | sofia-todeschini | null | null | sofia-todeschini/BioELECTRA-base-LitCovid-v1.0 | 0 | 2 | transformers | 2023-06-06T17:54:51 | ---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: BioELECTRA-base-LitCovid-v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioELECTRA-base-LitCovid-v1.0
This model is a fine-tuned version of [kamalkraj/bioelectra-base-discriminator-pubmed](https://huggingface.co/kamalkraj/bioelectra-base-discriminator-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1116
- F1: 0.8934
- Roc Auc: 0.9293
- Accuracy: 0.7876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1207 | 1.0 | 3120 | 0.1116 | 0.8934 | 0.9293 | 0.7876 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,481 | [
[
-0.0207366943359375,
-0.041839599609375,
0.0169677734375,
-0.007568359375,
-0.033203125,
-0.0091400146484375,
0.0015621185302734375,
-0.019866943359375,
0.01178741455078125,
0.018646240234375,
-0.042633056640625,
-0.042236328125,
-0.034881591796875,
-0.00740... |
Andrey13rasfasf/digits | 2023-06-06T18:37:54.000Z | [
"keras",
"region:us"
] | null | Andrey13rasfasf | null | null | Andrey13rasfasf/digits | 0 | 2 | keras | 2023-06-06T18:34:41 | ---
library_name: keras
---
# My digit recognition model
Trained on the MNIST dataset
 | 115 | [
[
-0.013641357421875,
-0.05694580078125,
0.01336669921875,
0.00638580322265625,
-0.05706787109375,
0.038116455078125,
0.023529052734375,
0.0168304443359375,
0.0670166015625,
0.02752685546875,
-0.0280303955078125,
-0.04754638671875,
-0.047088623046875,
-0.00650... |
cardiffnlp/flan-t5-base-tweet-ner7 | 2023-06-06T20:40:35.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | cardiffnlp | null | null | cardiffnlp/flan-t5-base-tweet-ner7 | 1 | 2 | transformers | 2023-06-06T20:39:03 |
---
widget:
- text: "Morning 5km run with @user for breast cancer awareness # pinkoctober # breastcancerawareness # zalorafit # zalorafitxbnwrc @ The Central Park , Desa Parkcity {URL}"
example_title: example 1
- text: "President Trump Arrives at UFC 244 in New York to Cheers from Crowd ( VIDEO ) {URL} via @user"
example_title: example 2
- text: "Mayo Breast Cancer Vaccine Could Be Available In Less Than A Decade via @Forbes {URL}"
example_title: example 3
---
# cardiffnlp/flan-t5-base-tweet-ner7
This is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned on [cardiffnlp/super_tweeteval (tweet_ner7)](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="cardiffnlp/flan-t5-base-tweet-ner7")
output = pipe("Morning 5km run with @user for breast cancer awareness # pinkoctober # breastcancerawareness # zalorafit # zalorafitxbnwrc @ The Central Park , Desa Parkcity {URL}")
```
| 1,036 | [
[
-0.0007157325744628906,
-0.01461029052734375,
0.0058441162109375,
0.018218994140625,
-0.02642822265625,
-0.00035119056701660156,
0.00897216796875,
-0.00768280029296875,
0.03875732421875,
0.019256591796875,
-0.06610107421875,
-0.045623779296875,
-0.05111694335937... |
shahafw/SpaceInvadersNoFrameskip-v4 | 2023-06-07T20:11:48.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | shahafw | null | null | shahafw/SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-06T22:03:42 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.50 +/- 113.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shahafw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shahafw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shahafw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,757 | [
[
-0.043670654296875,
-0.03973388671875,
0.01934814453125,
0.025390625,
-0.0121917724609375,
-0.017425537109375,
0.01009368896484375,
-0.0125885009765625,
0.01255035400390625,
0.02239990234375,
-0.0714111328125,
-0.03515625,
-0.0255126953125,
-0.00392913818359... |
afsuarezg/legalbert-large-1.7M-2_class_actions | 2023-06-07T02:32:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | afsuarezg | null | null | afsuarezg/legalbert-large-1.7M-2_class_actions | 0 | 2 | transformers | 2023-06-07T01:36:50 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: legalbert-large-1.7M-2_class_actions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalbert-large-1.7M-2_class_actions
This model is a fine-tuned version of [pile-of-law/legalbert-large-1.7M-2](https://huggingface.co/pile-of-law/legalbert-large-1.7M-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6428
- Accuracy: 0.61
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 150 | 0.6380 | 0.6333 |
| No log | 2.0 | 300 | 0.7457 | 0.55 |
| No log | 3.0 | 450 | 0.7066 | 0.45 |
| 0.6843 | 4.0 | 600 | 0.7218 | 0.6767 |
| 0.6843 | 5.0 | 750 | 0.6360 | 0.6067 |
| 0.6843 | 6.0 | 900 | 0.6502 | 0.6033 |
| 0.6751 | 7.0 | 1050 | 0.6664 | 0.6033 |
| 0.6751 | 8.0 | 1200 | 0.6490 | 0.6133 |
| 0.6751 | 9.0 | 1350 | 0.6506 | 0.6067 |
| 0.6781 | 10.0 | 1500 | 0.6486 | 0.61 |
| 0.6781 | 11.0 | 1650 | 0.6544 | 0.6167 |
| 0.6781 | 12.0 | 1800 | 0.6425 | 0.61 |
| 0.6781 | 13.0 | 1950 | 0.6417 | 0.61 |
| 0.6756 | 14.0 | 2100 | 0.6428 | 0.61 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,183 | [
[
-0.033599853515625,
-0.037872314453125,
0.01244354248046875,
-0.0020542144775390625,
-0.0164794921875,
-0.0203399658203125,
0.002704620361328125,
-0.007358551025390625,
0.02227783203125,
0.03729248046875,
-0.037109375,
-0.052337646484375,
-0.055084228515625,
... |
JackBAI/query_decision_train_on_maybe_train | 2023-06-07T04:40:55.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | JackBAI | null | null | JackBAI/query_decision_train_on_maybe_train | 1 | 2 | transformers | 2023-06-07T04:33:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: train_on_maybe_valid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_on_maybe_valid
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0145
- Rouge1: 98.5487
- Rouge2: 92.7434
- Rougel: 98.5481
- Rougelsum: 98.5497
- Gen Len: 14.3743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.2.1
- Tokenizers 0.13.3
| 1,218 | [
[
-0.0259246826171875,
-0.036468505859375,
0.0167694091796875,
0.00775146484375,
-0.021697998046875,
-0.033355712890625,
-0.0180816650390625,
-0.0191650390625,
0.0101776123046875,
0.0181427001953125,
-0.048675537109375,
-0.03936767578125,
-0.052490234375,
-0.0... |
michaelfeil/ct2fast-falcon-40b-instruct | 2023-06-26T12:30:59.000Z | [
"transformers",
"ctranslate2",
"int8",
"float16",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"region:us"
] | null | michaelfeil | null | null | michaelfeil/ct2fast-falcon-40b-instruct | 2 | 2 | transformers | 2023-06-07T07:58:07 | ---
tags:
- ctranslate2
- int8
- float16
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# # Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [tiiuae/falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)
```bash
pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.16.0
```
Converted on 2023-06-15 using
```
ct2-transformers-converter --model tiiuae/falcon-40b-instruct --output_dir /home/michael/tmp-ct2fast-falcon-40b-instruct --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-falcon-40b-instruct"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo.
# Original description
# ✨ Falcon-40B-Instruct
**Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost fron HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-40B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).**
* **Falcon-40B is the best open-source model available.** It outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is Falcon-40B-Instruct's little brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost]((https://huggingface.co/blog/falcon).
You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.
# Model Card for Falcon-40B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-40B-Instruct has been finetuned on a chat dataset.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-40B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-40B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-40B-Instruct was finetuned on a 150M tokens from [Bai ze](https://github.com/project-baize/baize-chatbot) mixed with 5% of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) data.
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
For more information about pretraining, see [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).
### Model Architecture and Objective
Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positionnal embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 60 | |
| `d_model` | 8192 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances.
#### Software
Falcon-40B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)
## Citation
*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
To cite the [Baize](https://github.com/project-baize/baize-chatbot) instruction dataset used for this model:
```
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
## License
Falcon-40B-Instruct is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae | 10,571 | [
[
-0.035369873046875,
-0.073486328125,
0.006900787353515625,
0.033203125,
-0.00640106201171875,
0.00090789794921875,
-0.01495361328125,
-0.0399169921875,
0.0129852294921875,
0.02520751953125,
-0.037322998046875,
-0.0273284912109375,
-0.05340576171875,
-0.00717... |
9wimu9/xlm-roberta-large-en-si-only-finetuned-sinquad-v12 | 2023-06-07T15:44:16.000Z | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | question-answering | 9wimu9 | null | null | 9wimu9/xlm-roberta-large-en-si-only-finetuned-sinquad-v12 | 0 | 2 | transformers | 2023-06-07T08:03:50 | ---
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-en-si-only-finetuned-sinquad-v12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-en-si-only-finetuned-sinquad-v12
This model is a fine-tuned version of [9wimu9/xlm-roberta-large-en-si-only](https://huggingface.co/9wimu9/xlm-roberta-large-en-si-only) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.612 | 1.0 | 93 | 0.8081 |
| 0.7979 | 2.0 | 186 | 0.6823 |
| 0.6887 | 3.0 | 279 | 0.6532 |
| 0.4166 | 4.0 | 372 | 0.6846 |
| 0.3985 | 5.0 | 465 | 0.7128 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.12.1
{'exact_match': 69.58841463414635, 'f1': 87.9274817360267} | 1,667 | [
[
-0.0310211181640625,
-0.041961669921875,
0.01276397705078125,
0.00690460205078125,
-0.026641845703125,
-0.032501220703125,
-0.026031494140625,
-0.0170135498046875,
0.00916290283203125,
0.033721923828125,
-0.05517578125,
-0.0469970703125,
-0.048797607421875,
... |
ISYS/MyNewModel | 2023-06-07T09:46:10.000Z | [
"keras",
"has_space",
"region:us"
] | null | ISYS | null | null | ISYS/MyNewModel | 0 | 2 | keras | 2023-06-07T09:12:45 | ---
library_name: keras
---
# Модель для распознавания цифр
Натренированная на датасете mnist
 | 107 | [
[
-0.00751495361328125,
-0.054443359375,
0.0110321044921875,
0.009552001953125,
-0.06036376953125,
0.0308685302734375,
0.02557373046875,
0.0208282470703125,
0.0777587890625,
0.0286865234375,
-0.03546142578125,
-0.044464111328125,
-0.04522705078125,
-0.00526046... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.