| modelId (string, length 4–111) | lastModified (string, length 24) | tags (list) | pipeline_tag (string, length 5–30, ⌀) | author (string, length 2–34, ⌀) | config (null) | securityStatus (null) | id (string, length 4–111) | likes (int64, 0–9.53k) | downloads (int64, 2–73.6M) | library_name (string, length 2–84, ⌀) | created (timestamp[us]) | card (string, length 101–901k) | card_len (int64, 101–901k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
krumeto/setfit-recipe-classifer | 2023-04-24T15:19:38.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | krumeto | null | null | krumeto/setfit-recipe-classifer | 1 | 2 | sentence-transformers | 2023-04-08T17:03:25 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# krumeto/setfit-recipe-classifer
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used to classify how difficult a given recipe is. The model has been trained using an efficient few-shot learning technique, sketched below, that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
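The exact base model and training data are not documented in this card; the following is a minimal sketch of that two-step procedure using the early `SetFitTrainer` API, with illustrative recipe texts, labels, and base model (all assumptions, not the author's actual setup):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot data; the real difficulty labels are not published
train_ds = Dataset.from_dict({
    "text": [
        "Boil pasta and stir in jarred sauce.",
        "Debone the quail, braise for three hours, then plate with a reduction.",
    ],
    "label": [0, 1],  # e.g. 0 = easy, 1 = difficult
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # sentence pairs generated per labeled example
    num_epochs=1,
)
trainer.train()  # fine-tunes the embeddings, then fits the classification head (step 2)
```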
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("krumeto/setfit-recipe-classifer")
complicated_recipe = """Ingredients:
4 ounces pancetta, diced into 1/4 inch cubes
2 1/2 to 3 pounds veal shanks (4 to 6 pieces, 2 to 3 inches thick)
1/2 cup diced onion
1/2 cup diced celery
1/2 cup diced carrot
3 garlic cloves, minced
1 1/2 cups canned chopped tomatoes
1 1/2 cups chicken broth
1/2 cup dry white wine
1 bay leaf
1 sprig fresh thyme
salt
freshly ground black pepper
all-purpose flour for dredging
2 tablespoons unsalted butter
2 tablespoons extra-virgin olive oil
4 3-inch strips of lemon zest
Directions:
Preheat oven to 375°F.
Heat the olive oil over medium heat in a large Dutch oven.
Cook pancetta until browned and crisp.
Remove pancetta with a slotted spoon and transfer to a paper towel-lined plate.
Season veal shanks with salt and pepper and dredge in flour.
Cook the veal until browned on all sides, working in batches if necessary, then transfer to a plate.
Add the onion, celery, carrot, garlic, and a pinch of salt to the Dutch oven and cook until softened.
Stir in the tomatoes, chicken broth, dry white wine, bay leaf, and thyme sprig.
Return the veal shanks and pancetta to the Dutch oven and bring the liquid to a simmer.
Cover the pot and place it in the oven to braise for 2-2 1/2 hours, until the veal is very tender.
Serve with gremolata and garnish with lemon zest strips.
Note: To make gremolata, finely chop 2 tablespoons fresh parsley, 1 tablespoon grated lemon zest, and 1 garlic clove. Mix together and sprinkle over the osso buco before serving."""
# Run inference
preds = model([complicated_recipe])
```
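`preds` holds one predicted difficulty label per input string; the model expects a batch, which is why the single recipe above is wrapped in a list.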
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 3,025 | [
[
0.00263214111328125,
-0.058990478515625,
0.03656005859375,
0.021270751953125,
0.00576019287109375,
0.0011701583862304688,
-0.00885772705078125,
-0.0198211669921875,
0.0080718994140625,
0.04791259765625,
-0.026153564453125,
-0.04449462890625,
-0.038848876953125,
... |
HeroGeonil/Hypert-medical | 2023-05-27T12:53:48.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | HeroGeonil | null | null | HeroGeonil/Hypert-medical | 0 | 2 | transformers | 2023-04-08T18:16:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Hypernymy-Aware-BERT-Medical-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hypert-medical
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 36
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 216
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
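For reference, these settings correspond roughly to the following `transformers.TrainingArguments`; this is a sketch, not the authors' actual training script, and the output path is hypothetical:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="hypert-medical",    # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=36,
    per_device_eval_batch_size=24,
    seed=42,
    gradient_accumulation_steps=6,  # 36 * 6 = 216 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=500,
    max_steps=10_000,
)
```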
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.10.1+cu111
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,147 | [
[
-0.0303497314453125,
-0.050018310546875,
0.0196075439453125,
0.008544921875,
-0.0357666015625,
-0.036865234375,
-0.01531982421875,
-0.0228729248046875,
0.0206756591796875,
0.028533935546875,
-0.052947998046875,
-0.042236328125,
-0.049224853515625,
-0.0050849... |
arkadyark/dqn-SpaceInvadersNoFrameskip-v4-default-params | 2023-04-08T19:07:34.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | arkadyark | null | null | arkadyark/dqn-SpaceInvadersNoFrameskip-v4-default-params | 0 | 2 | stable-baselines3 | 2023-04-08T19:06:47 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 373.50 +/- 194.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga arkadyark -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga arkadyark -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga arkadyark
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
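Outside the RL Zoo workflow, the checkpoint can also be loaded directly with SB3. A sketch using the `huggingface_sb3` helper follows; the artifact filename is an assumption based on the zoo's usual naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="arkadyark/dqn-SpaceInvadersNoFrameskip-v4-default-params",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed artifact name
)
model = DQN.load(checkpoint)
```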
| 2,695 | [
[
-0.041412353515625,
-0.03607177734375,
0.0218048095703125,
0.0240325927734375,
-0.01031494140625,
-0.0175628662109375,
0.01241302490234375,
-0.0146331787109375,
0.01288604736328125,
0.0246429443359375,
-0.07080078125,
-0.03546142578125,
-0.027099609375,
-0.0... |
ratish/bert-textClassification_v1.1 | 2023-04-08T22:39:55.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ratish | null | null | ratish/bert-textClassification_v1.1 | 0 | 2 | transformers | 2023-04-08T21:13:32 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/bert-textClassification_v1.1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/bert-textClassification_v1.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2176
- Validation Loss: 1.4740
- Train Accuracy: 0.5909
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 95, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
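The serialized optimizer above can be reconstructed in Keras as follows; a sketch based only on the logged config:
```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-5 to 0 over 95 steps, per the config above
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=95,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8
)
```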
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.2620 | 2.1136 | 0.3636 | 0 |
| 1.8161 | 1.8166 | 0.3864 | 1 |
| 1.4886 | 1.6061 | 0.5909 | 2 |
| 1.2862 | 1.5037 | 0.5909 | 3 |
| 1.2176 | 1.4740 | 0.5909 | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,964 | [
[
-0.039825439453125,
-0.0386962890625,
0.0242919921875,
0.00545501708984375,
-0.024627685546875,
-0.0153961181640625,
-0.021209716796875,
-0.017791748046875,
0.00677490234375,
-0.0029277801513671875,
-0.052947998046875,
-0.049224853515625,
-0.059600830078125,
... |
lenayagaf/bert-buzzfeed-balanced | 2023-04-09T11:22:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | lenayagaf | null | null | lenayagaf/bert-buzzfeed-balanced | 0 | 2 | transformers | 2023-04-08T21:36:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-buzzfeed-balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-buzzfeed-balanced
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6343
- Accuracy: 0.6383
- F1: 0.6383
- Precision: 0.6818
- Recall: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 47 | 0.6265 | 0.6330 | 0.6333 | 0.6667 | 0.62 |
| No log | 2.0 | 94 | 0.6343 | 0.6383 | 0.6383 | 0.6818 | 0.6 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,591 | [
[
-0.032989501953125,
-0.04473876953125,
0.00858306884765625,
0.0241241455078125,
-0.0224456787109375,
-0.0281829833984375,
-0.018341064453125,
-0.02862548828125,
0.0198822021484375,
0.014678955078125,
-0.052734375,
-0.036376953125,
-0.048675537109375,
-0.0244... |
sadia72/roberta-base-finetuned-sarcasm-news-headline-detection | 2023-04-08T22:11:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | sadia72 | null | null | sadia72/roberta-base-finetuned-sarcasm-news-headline-detection | 0 | 2 | transformers | 2023-04-08T21:56:54 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-sarcasm-news-headline-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sarcasm-news-headline-detection
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2325 | 1.0 | 1789 | 0.1235 |
| 0.1525 | 2.0 | 3578 | 0.0767 |
| 0.0944 | 3.0 | 5367 | 0.0451 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,414 | [
[
-0.028594970703125,
-0.0533447265625,
0.0217742919921875,
0.00696563720703125,
-0.0249786376953125,
-0.03302001953125,
-0.01910400390625,
-0.01214599609375,
0.0008502006530761719,
0.034515380859375,
-0.060150146484375,
-0.045440673828125,
-0.05615234375,
-0.... |
approach0/splade_all-cocomae-220 | 2023-04-08T23:36:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"pretraining",
"azbert",
"fill-mask",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | approach0 | null | null | approach0/splade_all-cocomae-220 | 0 | 2 | transformers | 2023-04-08T23:35:58 | ---
language: en
tags:
- azbert
- pretraining
- fill-mask
widget:
- text: "$f$ $($ $x$ [MASK] $y$ $)$"
example_title: "mathy"
- text: "$x$ [MASK] $x$ $equal$ $2$ $x$"
example_title: "mathy"
- text: "Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$"
example_title: "mathy"
- text: "Proof by induction that $n$ [MASK] $gt$ $3$ $n$ for $n$ $gt$ $6$"
example_title: "mathy"
- text: "The goal of life is [MASK]."
example_title: "philosophical"
license: mit
---
## About
This [repository](https://github.com/approach0/azbert) is a boilerplate for pushing a mask-filling model to the Hugging Face Model Hub.
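Once pushed, the checkpoint can also be queried with the plain `transformers` fill-mask pipeline; a sketch, assuming the repo ships a standard BERT MLM head and tokenizer:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="approach0/splade_all-cocomae-220")
# One of the widget examples from this card
print(fill("$x$ [MASK] $x$ $equal$ $2$ $x$"))
```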
### Upload to huggingface
Download your tokenizer, model checkpoints, and optionally the training logs (`events.out.*`) to the `./ckpt` directory (do not include any large files except `pytorch_model.bin` and log files `events.out.*`).
Optionally, test model using the MLM task:
```sh
pip install pya0 # for math token preprocessing
# testing local checkpoints:
python test.py ./ckpt/math-tokenizer ./ckpt/2-2-0/encoder.ckpt
# testing Model Hub checkpoints:
python test.py approach0/coco-mae-220 approach0/coco-mae-220
```
> **Note**
> Modify the test examples in `test.txt` to play with it.
> The test file is tab-separated; the first column lists additional positions to mask in the right-side sentence (useful for masking tokens in math markup).
> A zero means no additional mask positions.
To upload to huggingface, use the `upload2hgf.sh` script.
Before running this script, be sure to check:
* `git-lfs` is installed
* a git remote named `hgf` pointing to `https://huggingface.co/your/repo`
* model contains all the files needed: `config.json` and `pytorch_model.bin`
* tokenizer contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt` and `tokenizer.json`
* no `tokenizer_file` field in `tokenizer_config.json` (sometimes it is located locally at `~/.cache`)
| 1,964 | [
[
-0.03790283203125,
-0.04705810546875,
0.0060577392578125,
0.0325927734375,
-0.0110321044921875,
-0.0008606910705566406,
0.00016105175018310547,
-0.016815185546875,
0.03887939453125,
0.04632568359375,
-0.049530029296875,
-0.0406494140625,
-0.05340576171875,
-... |
ratish/bert-textClassification_v1.4 | 2023-04-09T00:26:36.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ratish | null | null | ratish/bert-textClassification_v1.4 | 0 | 2 | transformers | 2023-04-09T00:17:53 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/bert-textClassification_v1.4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/bert-textClassification_v1.4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3431
- Validation Loss: 0.8618
- Train Accuracy: 0.7273
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 285, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.2087 | 1.9909 | 0.4091 | 0 |
| 1.7130 | 1.6444 | 0.5909 | 1 |
| 1.3350 | 1.3844 | 0.5455 | 2 |
| 1.0642 | 1.2276 | 0.6136 | 3 |
| 0.8599 | 1.1036 | 0.6818 | 4 |
| 0.7216 | 1.0790 | 0.6818 | 5 |
| 0.6305 | 1.0403 | 0.6818 | 6 |
| 0.5304 | 0.9581 | 0.7045 | 7 |
| 0.4899 | 0.8977 | 0.7273 | 8 |
| 0.4332 | 0.8907 | 0.7273 | 9 |
| 0.4000 | 0.9072 | 0.7273 | 10 |
| 0.3740 | 0.8734 | 0.7273 | 11 |
| 0.3579 | 0.8726 | 0.7273 | 12 |
| 0.3448 | 0.8648 | 0.7273 | 13 |
| 0.3431 | 0.8618 | 0.7273 | 14 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,545 | [
[
-0.045623779296875,
-0.038665771484375,
0.0236663818359375,
0.0032863616943359375,
-0.017120361328125,
-0.01250457763671875,
-0.01361846923828125,
-0.01499176025390625,
0.017242431640625,
0.00432586669921875,
-0.0533447265625,
-0.05126953125,
-0.055908203125,
... |
erosendo/dqn-SpaceInvaders | 2023-04-09T03:48:25.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | erosendo | null | null | erosendo/dqn-SpaceInvaders | 0 | 2 | stable-baselines3 | 2023-04-09T03:47:47 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 493.50 +/- 106.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga erosendo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga erosendo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga erosendo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,691 | [
[
-0.04217529296875,
-0.037567138671875,
0.0215606689453125,
0.0253753662109375,
-0.00954437255859375,
-0.0189208984375,
0.013092041015625,
-0.01447296142578125,
0.01264190673828125,
0.0244598388671875,
-0.069091796875,
-0.034515380859375,
-0.0265960693359375,
... |
0x7194633/roberta-base-value-determinator | 2023-04-09T07:56:33.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 0x7194633 | null | null | 0x7194633/roberta-base-value-determinator | 0 | 2 | transformers | 2023-04-09T07:15:32 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-base-value-determinator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-value-determinator
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the 0x7194633/value_determinant dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,270 | [
[
-0.02703857421875,
-0.050567626953125,
0.0157470703125,
0.002445220947265625,
-0.0303497314453125,
-0.0280609130859375,
-0.0168609619140625,
-0.0143280029296875,
0.007648468017578125,
0.0244140625,
-0.049560546875,
-0.039794921875,
-0.05889892578125,
-0.0055... |
doggylion/distilbert-base-uncased-finetuned-emotion | 2023-04-09T15:54:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | doggylion | null | null | doggylion/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-09T12:09:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241955876397631
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.924
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8185 | 1.0 | 250 | 0.3127 | 0.9035 | 0.9002 |
| 0.2449 | 2.0 | 500 | 0.2180 | 0.924 | 0.9242 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037811279296875,
-0.041259765625,
0.01416778564453125,
0.0219268798828125,
-0.025634765625,
-0.0188751220703125,
-0.01325225830078125,
-0.00885772705078125,
0.01067352294921875,
0.00830078125,
-0.056121826171875,
-0.0518798828125,
-0.0601806640625,
-0.008... |
XYang2023/distilbert-base-uncased-emotion | 2023-04-10T00:34:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | XYang2023 | null | null | XYang2023/distilbert-base-uncased-emotion | 0 | 2 | transformers | 2023-04-09T13:28:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: F1
type: f1
value: 0.9200802440853002
- name: Accuracy
type: accuracy
value: 0.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2316
- F1: 0.9201
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 250 | 0.3294 | 0.9004 | 0.903 |
| No log | 2.0 | 500 | 0.2316 | 0.9201 | 0.92 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.12.1
| 1,818 | [
[
-0.03460693359375,
-0.039520263671875,
0.0164031982421875,
0.0259246826171875,
-0.0275421142578125,
-0.0167388916015625,
-0.0121002197265625,
-0.0088043212890625,
0.0121002197265625,
0.00997161865234375,
-0.056854248046875,
-0.05145263671875,
-0.05963134765625,
... |
andreaskoepf/pythia-6.9b-gpt4all-pretrain | 2023-04-09T14:33:08.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | andreaskoepf | null | null | andreaskoepf/pythia-6.9b-gpt4all-pretrain | 2 | 2 | transformers | 2023-04-09T13:35:52 | ---
license: apache-2.0
---
wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/kzy0gark
datasets:
```
pretrain:
num_train_epochs: 1
weight_decay: 0.0
use_custom_sampler: true
sort_by_length: false
datasets:
- joke
- webgpt:
val_split: 0.1
- gpt4all:
val_split: 0.01
- alpaca:
val_split: 0.025
- code_alpaca:
val_split: 0.05
- minimath
- humaneval_mbpp_codegen_qa
- humaneval_mbpp_testgen_qa
- grade_school_math_instructions
- recipes
- cmu_wiki_qa
- oa_wiki_qa_bart_10000row
- prosocial_dialogue:
fraction: 0.1
- explain_prosocial:
fraction: 0.05
- oig_file:
source_url: https://huggingface.co/datasets/laion/OIG/resolve/main/unified_chip2.jsonl
max_count: 10000
min_length: 250
val_split: 0.1
```
pythia:
```
pythia-6.9b-pretrain:
learning_rate: 6e-6
model_name: EleutherAI/pythia-6.9b-deduped
deepspeed_config: configs/zero3_config_pretrain.json
weight_decay: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 20
gradient_checkpointing: false
gradient_accumulation_steps: 2
per_device_train_batch_size: 5
per_device_eval_batch_size: 8
num_train_epochs: 1
save_total_limit: 2
```
command: `deepspeed trainer_sft.py --configs defaults pretrain pythia-6.9b-pretrain --cache_dir .cache/ --output_dir .saved_models/pythia-6.9b-pre --residual_dropout 0.0 --deepspeed` | 1,470 | [
[
-0.057464599609375,
-0.056060791015625,
0.01800537109375,
0.0170745849609375,
-0.0211944580078125,
-0.0182037353515625,
-0.016845703125,
0.003940582275390625,
0.01247406005859375,
0.0303192138671875,
-0.061370849609375,
-0.036346435546875,
-0.045684814453125,
... |
Bahasalab/BahasaGpt-chat | 2023-04-11T07:23:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"license:cc-by-nc-3.0",
"endpoints_compatible",
"region:us"
] | null | Bahasalab | null | null | Bahasalab/BahasaGpt-chat | 1 | 2 | transformers | 2023-04-09T13:44:42 | ---
license: cc-by-nc-3.0
---
# BahasaGPT-Chat
## Introduction
This document provides an overview of the BahasaGPT-Chat model, a model fine-tuned for Indonesian-language tasks. It is based on the Bloomz-7B-mt architecture and was fine-tuned on a dataset of over 120,000 chat instructions.
## Model Details
**Model Name:** BahasaGPT-Chat
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** Over 120k Indonesian instructions generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baize-Chatbot](https://github.com/project-baize/baize-chatbot)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-1 model was fine-tuned using a dataset of over 120k Indonesian instructions, generated with the [Baize-Chatbot](https://github.com/project-baize/baize-chatbot) method together with the Alpaca and OA translation datasets. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian-language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
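Since the base is Bloomz-7B-mt, the fine-tuned checkpoint should load as a standard causal LM; a minimal sketch (not from the card, and the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bahasalab/BahasaGpt-chat")
model = AutoModelForCausalLM.from_pretrained("Bahasalab/BahasaGpt-chat")

# "What is the capital of Indonesia?" in Indonesian
inputs = tokenizer("Apa ibu kota Indonesia?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```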
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-1 model still has some limitations:
**Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
**Bias:** The BahasaGPT-1 model, like other AI language models, can exhibit various forms of bias due to the data it was trained on. This includes, but is not limited to, gender, racial, and cultural biases. As a result, the model may generate outputs that perpetuate stereotypes, exhibit unfair treatment, or show preference for specific groups or perspectives. Efforts have been made to mitigate these biases, but they may still be present in the model's responses.
## Conclusion
The BahasaGPT-1 model is a fine-tuned language model for Indonesian-language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 120k Indonesian instructions generated with the [Baize-Chatbot](https://github.com/project-baize/baize-chatbot) method together with the Alpaca and OA translation datasets. Despite some limitations, such as occasional hallucination, the model provides a valuable tool for Indonesian-language tasks.
## How to Run
For a Gradio demo: [Gradio Code](https://github.com/acul3/Bahasa_Chat)
For Colab usage (int8): [Colab](https://colab.research.google.com/drive/1yvhJENcd0NKuMZNipAJVP4eP-k7-ilXj?usp=sharing) | 2,844 | [
[
-0.03790283203125,
-0.09820556640625,
0.0011310577392578125,
0.04107666015625,
-0.0191497802734375,
-0.0251312255859375,
-0.02178955078125,
-0.034820556640625,
0.01094818115234375,
0.053955078125,
-0.058135986328125,
-0.0311431884765625,
-0.050384521484375,
... |
Mukhtadir/distilbert-base-uncased-finetuned-emotion | 2023-04-10T10:48:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Mukhtadir | null | null | Mukhtadir/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-09T14:01:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9276531435070997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.9275
- F1: 0.9277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8335 | 1.0 | 250 | 0.3113 | 0.904 | 0.9007 |
| 0.2492 | 2.0 | 500 | 0.2144 | 0.9275 | 0.9277 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.03802490234375,
-0.041351318359375,
0.015533447265625,
0.021636962890625,
-0.0262603759765625,
-0.0191650390625,
-0.0130615234375,
-0.0086212158203125,
0.01030731201171875,
0.00864410400390625,
-0.055908203125,
-0.051483154296875,
-0.059722900390625,
-0.0... |
rukshanCodeGen/dummp | 2023-04-09T17:44:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rukshanCodeGen | null | null | rukshanCodeGen/dummp | 0 | 2 | transformers | 2023-04-09T14:20:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: dummp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.638
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4199
- Accuracy: 0.638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.9293 | 0.607 |
| No log | 2.0 | 250 | 1.0291 | 0.626 |
| No log | 3.0 | 375 | 1.2118 | 0.628 |
| No log | 4.0 | 500 | 1.3472 | 0.633 |
| No log | 5.0 | 625 | 1.4199 | 0.638 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,879 | [
[
-0.032318115234375,
-0.043365478515625,
0.015167236328125,
0.00855255126953125,
-0.0250244140625,
-0.036407470703125,
-0.01322174072265625,
-0.0194854736328125,
0.01148223876953125,
0.0234222412109375,
-0.059661865234375,
-0.046966552734375,
-0.041229248046875,
... |
madmancity/revmlc | 2023-04-11T18:21:02.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"sentiment-analysis",
"en",
"dataset:madmancity/revmlc",
"endpoints_compatible",
"region:us"
] | text-classification | madmancity | null | null | madmancity/revmlc | 0 | 2 | transformers | 2023-04-09T14:40:11 | ---
tags:
- text-classification
- sentiment-analysis
language:
- en
widget:
- text: "I love this product! One of my best purchases this year."
datasets:
- madmancity/revmlc
---
## Validation Metrics
- Loss: 0.595
- Accuracy: 0.789
- Macro F1: 0.575
- Micro F1: 0.789
- Weighted F1: 0.763
- Macro Precision: 0.630
- Micro Precision: 0.789
- Weighted Precision: 0.775
- Macro Recall: 0.588
- Micro Recall: 0.789
- Weighted Recall: 0.789
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love this product! One of my best purchases this year."}' https://api-inference.huggingface.co/models/madmancity/revmlc
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madmancity/revmlc", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madmancity/revmlc", use_auth_token=True)
inputs = tokenizer("I love this product! One of my best purchases this year.", return_tensors="pt")
outputs = model(**inputs)
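# The snippet above stops at raw logits; mapping them to a label is a typical
# post-processing step (an assumption, not shown in the card):
import torch
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs.max()))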
``` | 1,141 | [
[
-0.036865234375,
-0.038330078125,
0.011871337890625,
0.0323486328125,
-0.002285003662109375,
-0.0032482147216796875,
-0.004611968994140625,
-0.00974273681640625,
0.01418304443359375,
0.0102386474609375,
-0.0601806640625,
-0.058380126953125,
-0.04351806640625,
... |
ItchyB/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-09T20:08:40.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | ItchyB | null | null | ItchyB/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-09T15:17:38 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 274.50 +/- 31.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ItchyB -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ItchyB -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ItchyB
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,681 | [
[
-0.0406494140625,
-0.03692626953125,
0.0219268798828125,
0.02545166015625,
-0.009796142578125,
-0.017333984375,
0.01224517822265625,
-0.01348876953125,
0.01506805419921875,
0.02447509765625,
-0.07000732421875,
-0.0355224609375,
-0.0270538330078125,
-0.005256... |
Phoshco/ADHDvsN | 2023-04-09T16:57:14.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/ADHDvsN | 0 | 2 | transformers | 2023-04-09T15:45:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ADHDvsN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ADHDvsN
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7460
- F1: 0.684
- Roc Auc: 0.6836
- Accuracy: 0.684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6333 | 1.0 | 875 | 0.6383 | 0.6368 | 0.6321 | 0.6368 |
| 0.591 | 2.0 | 1750 | 0.6384 | 0.6925 | 0.6926 | 0.6925 |
| 0.5103 | 3.0 | 2625 | 0.6349 | 0.6827 | 0.6855 | 0.6827 |
| 0.4122 | 4.0 | 3500 | 0.6424 | 0.668 | 0.6658 | 0.668 |
| 0.3287 | 5.0 | 4375 | 0.7460 | 0.684 | 0.6836 | 0.684 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,711 | [
[
-0.040740966796875,
-0.05078125,
0.00978851318359375,
0.001953125,
-0.0177764892578125,
-0.03216552734375,
-0.004497528076171875,
-0.01045989990234375,
0.0239715576171875,
0.0256805419921875,
-0.063232421875,
-0.049530029296875,
-0.047637939453125,
-0.022689... |
xhorvat9/LTR_BERT_512_noTSD | 2023-04-09T17:37:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | xhorvat9 | null | null | xhorvat9/LTR_BERT_512_noTSD | 0 | 2 | transformers | 2023-04-09T15:59:42 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [zhihan1996/DNA_bert_6](https://huggingface.co/zhihan1996/DNA_bert_6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3879
- Accuracy: 0.8612
- Precision: 0.9154
- Recall: 0.8240
- F1: 0.8673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4958 | 0.46 | 500 | 0.4749 | 0.7831 | 0.7303 | 0.9606 | 0.8298 |
| 0.3928 | 0.93 | 1000 | 0.4086 | 0.8207 | 0.7717 | 0.9574 | 0.8546 |
| 0.3319 | 1.39 | 1500 | 0.3467 | 0.8635 | 0.8664 | 0.8891 | 0.8776 |
| 0.3036 | 1.85 | 2000 | 0.3176 | 0.8702 | 0.8717 | 0.8960 | 0.8836 |
| 0.2383 | 2.31 | 2500 | 0.3403 | 0.8707 | 0.8901 | 0.8728 | 0.8814 |
| 0.2189 | 2.78 | 3000 | 0.3879 | 0.8612 | 0.9154 | 0.8240 | 0.8673 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 1,901 | [
[
-0.04217529296875,
-0.039459228515625,
0.01263427734375,
-0.00362396240234375,
-0.0245513916015625,
-0.025665283203125,
-0.0086517333984375,
-0.0153350830078125,
0.0206146240234375,
0.016571044921875,
-0.060455322265625,
-0.045013427734375,
-0.046295166015625,
... |
JaviBJ/ppo-SnowballTarget | 2023-04-09T17:23:07.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | JaviBJ | null | null | JaviBJ/ppo-SnowballTarget | 0 | 2 | ml-agents | 2023-04-09T17:23:02 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: JaviBJ/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 985 | [
[
-0.0164642333984375,
-0.02783203125,
0.007312774658203125,
0.0165863037109375,
-0.0221710205078125,
0.0170440673828125,
0.021820068359375,
-0.00634765625,
0.0258331298828125,
0.03900146484375,
-0.053070068359375,
-0.05596923828125,
-0.041351318359375,
-0.017... |
FCameCode/BERT_model | 2023-04-10T11:25:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | FCameCode | null | null | FCameCode/BERT_model | 0 | 2 | transformers | 2023-04-09T17:37:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1260
- Accuracy: 0.9679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0934 | 1.0 | 1995 | 0.0993 | 0.9683 |
| 0.0575 | 2.0 | 3990 | 0.1079 | 0.9695 |
| 0.033 | 3.0 | 5985 | 0.1260 | 0.9679 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,436 | [
[
-0.03204345703125,
-0.045928955078125,
0.01505279541015625,
0.011627197265625,
-0.0277252197265625,
-0.03680419921875,
-0.0172271728515625,
-0.02215576171875,
0.00977325439453125,
0.0238037109375,
-0.054595947265625,
-0.046356201171875,
-0.047882080078125,
-... |
OtherBrian/distilbert-base-uncased-finetuned-emotion | 2023-04-10T15:26:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | OtherBrian | null | null | OtherBrian/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-09T19:15:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240896354671038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2328
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8867 | 1.0 | 250 | 0.3406 | 0.9025 | 0.8973 |
| 0.2671 | 2.0 | 500 | 0.2328 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.0380859375,
-0.042144775390625,
0.0158538818359375,
0.022003173828125,
-0.026641845703125,
-0.0194854736328125,
-0.01329803466796875,
-0.0088348388671875,
0.01056671142578125,
0.00878143310546875,
-0.0565185546875,
-0.051788330078125,
-0.0596923828125,
-0... |
ValenHumano/roberta-base-bne-finetuned-amazon_reviews_multi | 2023-04-09T22:13:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | ValenHumano | null | null | ValenHumano/roberta-base-bne-finetuned-amazon_reviews_multi | 1 | 2 | transformers | 2023-04-09T21:47:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2233
- Accuracy: 0.933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1956 | 1.0 | 1250 | 0.1798 | 0.9323 |
| 0.107 | 2.0 | 2500 | 0.2233 | 0.933 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,792 | [
[
-0.039459228515625,
-0.048736572265625,
0.0093994140625,
0.0140380859375,
-0.027496337890625,
-0.0303497314453125,
-0.016845703125,
-0.01898193359375,
0.0080413818359375,
0.028045654296875,
-0.05072021484375,
-0.045562744140625,
-0.05316162109375,
-0.0096282... |
JyaouShingan/distilbert-base-uncased-local-finetuned-emotion | 2023-04-12T03:51:49.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | JyaouShingan | null | null | JyaouShingan/distilbert-base-uncased-local-finetuned-emotion | 0 | 2 | transformers | 2023-04-10T01:09:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-local-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264142965360822
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-local-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2191
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3158 | 0.905 | 0.9026 |
| No log | 2.0 | 500 | 0.2191 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,854 | [
[
-0.03680419921875,
-0.042755126953125,
0.013458251953125,
0.0204620361328125,
-0.024505615234375,
-0.0205078125,
-0.01488494873046875,
-0.0124053955078125,
0.011993408203125,
0.0128021240234375,
-0.0531005859375,
-0.05230712890625,
-0.054168701171875,
-0.006... |
Phoshco/allvsN | 2023-04-10T06:02:33.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/allvsN | 0 | 2 | transformers | 2023-04-10T03:43:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: allvsN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# allvsN
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5592
- F1: 0.3265
- Accuracy: 0.3265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 1.7706 | 1.0 | 1750 | 1.7077 | 0.3169 | 0.3169 |
| 1.621 | 2.0 | 3500 | 1.6943 | 0.3396 | 0.3396 |
| 1.3775 | 3.0 | 5250 | 1.7806 | 0.3458 | 0.3458 |
| 0.9342 | 4.0 | 7000 | 2.0859 | 0.3406 | 0.3406 |
| 0.5596 | 5.0 | 8750 | 2.5592 | 0.3265 | 0.3265 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,623 | [
[
-0.034423828125,
-0.03997802734375,
0.01117706298828125,
0.012298583984375,
-0.025909423828125,
-0.036529541015625,
-0.01027679443359375,
-0.0105743408203125,
0.015869140625,
0.027313232421875,
-0.05810546875,
-0.051177978515625,
-0.04510498046875,
-0.019821... |
rwang5688/distilbert-base-uncased-finetuned-sst2-pt | 2023-09-25T06:42:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | rwang5688 | null | null | rwang5688/distilbert-base-uncased-finetuned-sst2-pt | 1 | 2 | transformers | 2023-04-10T04:35:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-pt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9071100917431193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-pt
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4661
- Accuracy: 0.9071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1863 | 1.0 | 4210 | 0.3161 | 0.8991 |
| 0.1237 | 2.0 | 8420 | 0.3776 | 0.8956 |
| 0.0997 | 3.0 | 12630 | 0.3770 | 0.9025 |
| 0.0609 | 4.0 | 16840 | 0.4661 | 0.9071 |
| 0.0376 | 5.0 | 21050 | 0.5535 | 0.9014 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.2
| 1,923 | [
[
-0.0213775634765625,
-0.04766845703125,
0.0142364501953125,
0.013153076171875,
-0.029632568359375,
-0.01517486572265625,
-0.0092620849609375,
-0.001857757568359375,
0.0059051513671875,
0.0123291015625,
-0.04718017578125,
-0.038909912109375,
-0.06292724609375,
... |
Phoshco/bipolarvsN | 2023-04-10T09:02:28.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/bipolarvsN | 0 | 2 | transformers | 2023-04-10T07:44:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bipolarvsN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bipolarvsN
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7437
- F1: 0.7833
- Roc Auc: 0.7818
- Accuracy: 0.7833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5251 | 1.0 | 875 | 0.5404 | 0.736 | 0.7299 | 0.736 |
| 0.4396 | 2.0 | 1750 | 0.4694 | 0.7974 | 0.7966 | 0.7974 |
| 0.373 | 3.0 | 2625 | 0.5041 | 0.797 | 0.7963 | 0.797 |
| 0.2828 | 4.0 | 3500 | 0.6178 | 0.7939 | 0.7931 | 0.7939 |
| 0.2147 | 5.0 | 4375 | 0.7437 | 0.7833 | 0.7818 | 0.7833 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,719 | [
[
-0.03875732421875,
-0.042877197265625,
0.01053619384765625,
0.0131072998046875,
-0.029998779296875,
-0.0225982666015625,
-0.00484466552734375,
-0.003910064697265625,
0.0208587646484375,
0.03509521484375,
-0.060150146484375,
-0.058563232421875,
-0.048309326171875... |
Amite5h/TextClassificationmulticlass | 2023-04-10T08:52:58.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Amite5h | null | null | Amite5h/TextClassificationmulticlass | 1 | 2 | transformers | 2023-04-10T08:43:47 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TextClassificationmulticlass
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TextClassificationmulticlass
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent Keras optimizer follows the list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
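For reference, a minimal sketch of the equivalent Keras optimizer, reconstructed from the serialized config above (the original training script is not included in this card):

```python
import tensorflow as tf

# Illustrative reconstruction of the Adam configuration listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```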
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 1,287 | [
[
-0.038909912109375,
-0.0521240234375,
0.024261474609375,
0.0029773712158203125,
-0.0411376953125,
-0.0109100341796875,
-0.0156097412109375,
-0.01605224609375,
0.006259918212890625,
0.008026123046875,
-0.044219970703125,
-0.052642822265625,
-0.0648193359375,
... |
chanelcolgate/vit-base-patch16-224-chest-x-ray | 2023-04-10T09:08:15.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:chest-xray-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | chanelcolgate | null | null | chanelcolgate/vit-base-patch16-224-chest-x-ray | 0 | 2 | transformers | 2023-04-10T08:45:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- chest-xray-classification
model-index:
- name: vit-base-patch16-224-chest-x-ray
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-chest-x-ray
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the chest-xray-classification dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,114 | [
[
-0.028289794921875,
-0.037353515625,
0.01309967041015625,
-0.00955963134765625,
-0.04412841796875,
-0.0261993408203125,
0.0096435546875,
-0.0160675048828125,
0.0113372802734375,
0.03662109375,
-0.050567626953125,
-0.0406494140625,
-0.050323486328125,
-0.0151... |
GhifSmile/xlm-roberta-base-uncased-PINA | 2023-04-10T10:11:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | GhifSmile | null | null | GhifSmile/xlm-roberta-base-uncased-PINA | 0 | 2 | transformers | 2023-04-10T08:54:45 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: xlm-roberta-base-uncased-PINA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-uncased-PINA
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0862
- Accuracy: 0.7553
- Precision: 0.5016
- Recall: 0.4522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 2.7204 | 1.0 | 234 | 2.4959 | 0.4220 | 0.0124 | 0.0269 |
| 2.2553 | 2.0 | 468 | 1.9819 | 0.5 | 0.0498 | 0.0802 |
| 1.9593 | 3.0 | 702 | 1.7527 | 0.5513 | 0.1222 | 0.1377 |
| 1.6947 | 4.0 | 936 | 1.5375 | 0.6325 | 0.2466 | 0.2480 |
| 1.4593 | 5.0 | 1170 | 1.3773 | 0.6848 | 0.4074 | 0.3414 |
| 1.2381 | 6.0 | 1404 | 1.2560 | 0.7094 | 0.4273 | 0.3638 |
| 1.0986 | 7.0 | 1638 | 1.1813 | 0.7286 | 0.4396 | 0.4033 |
| 0.9817 | 8.0 | 1872 | 1.1668 | 0.7361 | 0.4824 | 0.4345 |
| 0.8894 | 9.0 | 2106 | 1.1054 | 0.7521 | 0.5155 | 0.4461 |
| 0.8518 | 10.0 | 2340 | 1.0862 | 0.7553 | 0.5016 | 0.4522 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,210 | [
[
-0.03680419921875,
-0.042144775390625,
0.0216064453125,
0.0018157958984375,
-0.0141143798828125,
-0.01934814453125,
-0.005767822265625,
-0.0115814208984375,
0.0234832763671875,
0.031982421875,
-0.04998779296875,
-0.052520751953125,
-0.053131103515625,
-0.011... |
Seungjun/articleGeneratorV1.0 | 2023-04-10T10:19:56.000Z | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Seungjun | null | null | Seungjun/articleGeneratorV1.0 | 1 | 2 | transformers | 2023-04-10T09:02:51 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: articleGeneratorV1.0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# What the model does and how to use it
Just provide a title to the model and it will generate a whole article about it.
```python
# Install transformers library
!pip install transformers
```
```python
# Load tokenizer and model
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
model_name = "Seungjun/articleGeneratorV1.0"
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)
```
```python
# Get the article for a given title
from transformers import pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
summarizer(
"Steve Jobs", # title
min_length=500,
max_length=1024,
)
```
Result:
# Current limitations of the model
It generates a lot of falsehoods: roughly 99% of the words generated by this model are not true.
# articleGeneratorV1.0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9568
- Validation Loss: 3.6096
- Train Rougel: tf.Tensor(0.08172019, shape=(), dtype=float32)
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:-----------------------------------------------:|:-----:|
| 4.9218 | 4.0315 | tf.Tensor(0.08038119, shape=(), dtype=float32) | 0 |
| 4.2887 | 3.8366 | tf.Tensor(0.08103053, shape=(), dtype=float32) | 1 |
| 4.1269 | 3.7328 | tf.Tensor(0.081041485, shape=(), dtype=float32) | 2 |
| 4.0276 | 3.6614 | tf.Tensor(0.081364945, shape=(), dtype=float32) | 3 |
| 3.9568 | 3.6096 | tf.Tensor(0.08172019, shape=(), dtype=float32) | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,875 | [
[
-0.031494140625,
-0.0384521484375,
0.02978515625,
-0.0003437995910644531,
-0.0205841064453125,
-0.022705078125,
-0.0007805824279785156,
-0.01409912109375,
0.01947021484375,
0.0169830322265625,
-0.04400634765625,
-0.0545654296875,
-0.05584716796875,
-0.007942... |
Phoshco/depressionvsN | 2023-04-10T10:24:59.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/depressionvsN | 0 | 2 | transformers | 2023-04-10T09:06:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: depressionvsN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depressionvsN
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1005
- F1: 0.6615
- Roc Auc: 0.6610
- Accuracy: 0.6615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6688 | 1.0 | 875 | 0.6239 | 0.6552 | 0.6546 | 0.6552 |
| 0.5832 | 2.0 | 1750 | 0.5966 | 0.6786 | 0.6789 | 0.6786 |
| 0.4778 | 3.0 | 2625 | 0.6958 | 0.6791 | 0.6795 | 0.6791 |
| 0.3487 | 4.0 | 3500 | 0.7418 | 0.6637 | 0.6617 | 0.6637 |
| 0.2266 | 5.0 | 4375 | 1.1005 | 0.6615 | 0.6610 | 0.6615 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,725 | [
[
-0.039581298828125,
-0.0439453125,
0.0132293701171875,
0.011993408203125,
-0.0229644775390625,
-0.031463623046875,
-0.00940704345703125,
-0.0099334716796875,
0.0216217041015625,
0.0302581787109375,
-0.06268310546875,
-0.0540771484375,
-0.047821044921875,
-0.... |
sakethchalla/isl-nodel | 2023-04-10T10:34:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | sakethchalla | null | null | sakethchalla/isl-nodel | 0 | 2 | transformers | 2023-04-10T09:51:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: isl-nodel
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7540407589599438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# isl-nodel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9554
- Accuracy: 0.7540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how the effective batch size is reached):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
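A minimal `TrainingArguments` sketch of the settings above (illustrative only; `output_dir` is a placeholder, not taken from the card). Note the effective train batch size: 16 per device × 4 accumulation steps = 64:

```python
from transformers import TrainingArguments

# Illustrative reconstruction; the effective train batch size is
# 16 * 4 = 64 via gradient accumulation.
training_args = TrainingArguments(
    output_dir="isl-nodel",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```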
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6213 | 1.0 | 89 | 2.3886 | 0.6128 |
| 1.66 | 2.0 | 178 | 1.5769 | 0.7119 |
| 1.3588 | 3.0 | 267 | 1.3264 | 0.7358 |
| 1.1062 | 4.0 | 356 | 1.1833 | 0.7386 |
| 1.1883 | 5.0 | 445 | 1.1025 | 0.7442 |
| 1.159 | 6.0 | 534 | 1.0324 | 0.7505 |
| 0.9934 | 7.0 | 623 | 0.9626 | 0.7674 |
| 0.8885 | 8.0 | 712 | 1.0080 | 0.7435 |
| 0.9325 | 9.0 | 801 | 0.9395 | 0.7681 |
| 0.9254 | 10.0 | 890 | 0.9554 | 0.7540 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,309 | [
[
-0.0318603515625,
-0.04571533203125,
0.007221221923828125,
0.006542205810546875,
-0.0262908935546875,
-0.02020263671875,
-0.0015058517456054688,
-0.017120361328125,
0.01224517822265625,
0.0220794677734375,
-0.05303955078125,
-0.0513916015625,
-0.051239013671875,... |
Phoshco/EDAnonymousvsN | 2023-04-10T11:47:43.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/EDAnonymousvsN | 0 | 2 | transformers | 2023-04-10T10:28:33 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: EDAnonymousvsN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EDAnonymousvsN
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4876
- F1: 0.8914
- Roc Auc: 0.8899
- Accuracy: 0.8914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3665 | 1.0 | 875 | 0.3116 | 0.889 | 0.8858 | 0.889 |
| 0.253 | 2.0 | 1750 | 0.2832 | 0.8884 | 0.8866 | 0.8884 |
| 0.2082 | 3.0 | 2625 | 0.3573 | 0.8934 | 0.8915 | 0.8934 |
| 0.1422 | 4.0 | 3500 | 0.4506 | 0.8932 | 0.8926 | 0.8932 |
| 0.0953 | 5.0 | 4375 | 0.4876 | 0.8914 | 0.8899 | 0.8914 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,727 | [
[
-0.037353515625,
-0.045928955078125,
0.01233673095703125,
0.00923919677734375,
-0.0251312255859375,
-0.03302001953125,
-0.01062774658203125,
-0.0124969482421875,
0.0245513916015625,
0.0307159423828125,
-0.061798095703125,
-0.0556640625,
-0.047119140625,
-0.0... |
mnavas/roberta-finetuned-solvencia-v1 | 2023-04-12T13:27:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | mnavas | null | null | mnavas/roberta-finetuned-solvencia-v1 | 0 | 2 | transformers | 2023-04-10T10:49:56 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-finetuned-solvencia-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-solvencia-v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4141
- Accuracy: 0.8919
- F1: 0.8919
- Precision: 0.8919
- Recall: 0.8919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 333 | 0.3781 | 0.8544 | 0.8544 | 0.8544 | 0.8544 |
| 0.4429 | 2.0 | 666 | 0.3295 | 0.8679 | 0.8679 | 0.8679 | 0.8679 |
| 0.4429 | 3.0 | 999 | 0.3664 | 0.8784 | 0.8784 | 0.8784 | 0.8784 |
| 0.3512 | 4.0 | 1332 | 0.4602 | 0.8649 | 0.8649 | 0.8649 | 0.8649 |
| 0.2975 | 5.0 | 1665 | 0.4721 | 0.8889 | 0.8889 | 0.8889 | 0.8889 |
| 0.2975 | 6.0 | 1998 | 0.4141 | 0.8919 | 0.8919 | 0.8919 | 0.8919 |
| 0.2499 | 7.0 | 2331 | 0.4054 | 0.8889 | 0.8889 | 0.8889 | 0.8889 |
| 0.2132 | 8.0 | 2664 | 0.4878 | 0.8829 | 0.8829 | 0.8829 | 0.8829 |
| 0.2132 | 9.0 | 2997 | 0.4867 | 0.8904 | 0.8904 | 0.8904 | 0.8904 |
| 0.1812 | 10.0 | 3330 | 0.5339 | 0.8889 | 0.8889 | 0.8889 | 0.8889 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
| 2,330 | [
[
-0.036529541015625,
-0.0426025390625,
0.015380859375,
0.0016307830810546875,
-0.0178985595703125,
-0.017913818359375,
-0.003856658935546875,
-0.01021575927734375,
0.0266265869140625,
0.0318603515625,
-0.058013916015625,
-0.051025390625,
-0.053009033203125,
-... |
Phoshco/ptsdvsN | 2023-04-10T13:26:28.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Phoshco | null | null | Phoshco/ptsdvsN | 0 | 2 | transformers | 2023-04-10T12:08:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ptsdvsN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ptsdvsN
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0050
- F1: 0.8051
- Roc Auc: 0.8042
- Accuracy: 0.8051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4907 | 1.0 | 875 | 0.4679 | 0.8161 | 0.8164 | 0.8161 |
| 0.3568 | 2.0 | 1750 | 0.4654 | 0.8221 | 0.8225 | 0.8221 |
| 0.2289 | 3.0 | 2625 | 0.7412 | 0.7843 | 0.7800 | 0.7843 |
| 0.1246 | 4.0 | 3500 | 0.8720 | 0.8013 | 0.7995 | 0.8013 |
| 0.0656 | 5.0 | 4375 | 1.0050 | 0.8051 | 0.8042 | 0.8051 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| 1,713 | [
[
-0.037078857421875,
-0.04510498046875,
0.01284027099609375,
0.010284423828125,
-0.02587890625,
-0.031494140625,
-0.00720977783203125,
-0.0105133056640625,
0.0198516845703125,
0.03106689453125,
-0.060211181640625,
-0.05322265625,
-0.048858642578125,
-0.016708... |
justinsiow/PyramdisRND | 2023-04-10T14:20:05.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | justinsiow | null | null | justinsiow/PyramdisRND | 0 | 2 | ml-agents | 2023-04-10T14:18:06 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: justinsiow/PyramdisRND
3. Select your .nn or .onnx file
4. Click on Watch the agent play 👀
| 952 | [
[
-0.0265960693359375,
-0.020050048828125,
0.0011968612670898438,
0.026275634765625,
-0.00969696044921875,
0.005889892578125,
0.0276336669921875,
-0.0035953521728515625,
0.0347900390625,
0.036651611328125,
-0.036895751953125,
-0.050537109375,
-0.036346435546875,
... |
AyoubChLin/distilbert_cnn_news | 2023-05-07T15:00:42.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:AyoubChLin/CNN_News_Articles_2011-2022",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AyoubChLin | null | null | AyoubChLin/distilbert_cnn_news | 1 | 2 | transformers | 2023-04-10T14:52:20 | ---
license: apache-2.0
datasets:
- AyoubChLin/CNN_News_Articles_2011-2022
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: money in the pocket
- text: no one can win this cup in Qatar.
- text: Health is an essential aspect of our lives that affects us physically, mentally, and emotionally. Maintaining good health requires us to make healthy lifestyle choices, including eating a balanced diet, getting regular exercise, and getting enough sleep. These habits can help reduce the risk of developing chronic diseases such as diabetes, heart disease, and cancer.
---
## DistilBertForSequenceClassification on CNN News Dataset
This repository contains a fine-tuned DistilBert base model for sequence classification on the CNN News dataset. The model is able to classify news articles into one of six categories: business, entertainment, health, news, politics, and sport.
The model was fine-tuned for four epochs, achieving a training loss of 0.012900 and a validation loss of 0.151663, with the following evaluation metrics:
- Accuracy: 0.9607394366197183
- F1: 0.962072
- Precision: 0.961904
- Recall: 0.962324
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/)
- **Shared by:** HuggingFace
- **Model type:** Language model
- **Language(s) (NLP):** en
- **Finetuned from model:** [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
### Usage
You can use this model with the Hugging Face Transformers library for a variety of natural language processing tasks, such as text classification, sentiment analysis, and more.
Here's an example of how to use this model for text classification in Python:
``` python
import torch
from transformers import AutoTokenizer, DistilBertForSequenceClassification

model_name = "AyoubChLin/distilbert_cnn_news"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = DistilBertForSequenceClassification.from_pretrained(model_name)

text = "This is a news article about politics."
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
```
In this example, we first load the tokenizer and the model using their respective `from_pretrained` methods. We then encode a news article with the tokenizer, pass the inputs through the model, and extract the predicted class id with `argmax`. Finally, the class id can be mapped to its category name, as shown in the sketch below.
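A minimal sketch of that mapping step, assuming the fine-tuned config stores the six category names in `id2label` (if it does not, substitute an explicit list of labels):

```python
# Assumes the fine-tuned config carries id2label; otherwise use an explicit
# list such as ["business", "entertainment", "health", "news", "politics", "sport"].
label = model.config.id2label[predicted_class_id]
print(label)
```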
### Contributors
This model was fine-tuned by CHERGUELAINE Ayoub and BOUBEKRI Faycal. | 2,781 | [
[
-0.0245819091796875,
-0.047637939453125,
0.00920867919921875,
0.01549530029296875,
-0.0177001953125,
0.005886077880859375,
-0.006946563720703125,
-0.0172576904296875,
-0.00508880615234375,
0.012420654296875,
-0.028564453125,
-0.049346923828125,
-0.064208984375,
... |
galkowskim/distilbert-base-uncased-finetuned-emotions | 2023-04-10T16:13:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | galkowskim | null | null | galkowskim/distilbert-base-uncased-finetuned-emotions | 0 | 2 | transformers | 2023-04-10T15:45:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9255657653416817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2190
- Accuracy: 0.9255
- F1: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8373 | 1.0 | 250 | 0.3222 | 0.903 | 0.8990 |
| 0.2494 | 2.0 | 500 | 0.2190 | 0.9255 | 0.9256 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,850 | [
[
-0.039337158203125,
-0.041473388671875,
0.01485443115234375,
0.021209716796875,
-0.0269927978515625,
-0.0194854736328125,
-0.0135345458984375,
-0.0081787109375,
0.007419586181640625,
0.007656097412109375,
-0.057647705078125,
-0.051605224609375,
-0.05868530273437... |
Svetlana0303/regression_Albert_1500 | 2023-04-10T15:54:44.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/regression_Albert_1500 | 0 | 2 | transformers | 2023-04-10T15:54:38 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_bert_1500
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_bert_1500
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3665
- Train Mae: 0.5651
- Train Mse: 0.4539
- Train R2-score: 0.5632
- Validation Loss: 0.3640
- Validation Mae: 0.6123
- Validation Mse: 0.4470
- Validation R2-score: 0.5765
- Epoch: 22
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.3911 | 0.5811 | 0.4875 | 0.5636 | 0.3808 | 0.6393 | 0.4778 | 0.4775 | 0 |
| 0.3669 | 0.5644 | 0.4527 | 0.6196 | 0.3524 | 0.5673 | 0.4286 | 0.6944 | 1 |
| 0.3652 | 0.5606 | 0.4457 | 0.6645 | 0.3711 | 0.6253 | 0.4600 | 0.5315 | 2 |
| 0.3669 | 0.5642 | 0.4490 | 0.5194 | 0.3525 | 0.5695 | 0.4286 | 0.6901 | 3 |
| 0.3693 | 0.5693 | 0.4580 | 0.6646 | 0.3558 | 0.5904 | 0.4329 | 0.6414 | 4 |
| 0.3682 | 0.5633 | 0.4540 | 0.7464 | 0.3602 | 0.5255 | 0.4485 | 0.7509 | 5 |
| 0.3712 | 0.5632 | 0.4527 | 0.6645 | 0.3650 | 0.6145 | 0.4489 | 0.5693 | 6 |
| 0.3781 | 0.5720 | 0.4661 | 0.5801 | 0.3545 | 0.5849 | 0.4309 | 0.6553 | 7 |
| 0.3659 | 0.5673 | 0.4564 | 0.1693 | 0.3723 | 0.6271 | 0.4621 | 0.5247 | 8 |
| 0.3693 | 0.5642 | 0.4487 | 0.7048 | 0.3524 | 0.5641 | 0.4289 | 0.7006 | 9 |
| 0.3656 | 0.5655 | 0.4495 | 0.6565 | 0.3575 | 0.5328 | 0.4425 | 0.7448 | 10 |
| 0.3685 | 0.5632 | 0.4540 | 0.7202 | 0.3551 | 0.5878 | 0.4319 | 0.6482 | 11 |
| 0.3702 | 0.5646 | 0.4543 | 0.7295 | 0.3528 | 0.5557 | 0.4306 | 0.7152 | 12 |
| 0.3661 | 0.5615 | 0.4450 | 0.6631 | 0.3683 | 0.5240 | 0.4664 | 0.7592 | 13 |
| 0.3835 | 0.5742 | 0.4757 | 0.7335 | 0.3531 | 0.5523 | 0.4316 | 0.7206 | 14 |
| 0.3641 | 0.5628 | 0.4472 | 0.7325 | 0.3559 | 0.5909 | 0.4331 | 0.6399 | 15 |
| 0.3764 | 0.5633 | 0.4566 | 0.7291 | 0.3549 | 0.5867 | 0.4315 | 0.6508 | 16 |
| 0.3625 | 0.5594 | 0.4443 | 0.5555 | 0.3648 | 0.6141 | 0.4486 | 0.5707 | 17 |
| 0.3816 | 0.5743 | 0.4693 | 0.6649 | 0.3559 | 0.5385 | 0.4385 | 0.7389 | 18 |
| 0.3721 | 0.5721 | 0.4618 | 0.6791 | 0.3529 | 0.5745 | 0.4288 | 0.6795 | 19 |
| 0.3711 | 0.5659 | 0.4586 | 0.2709 | 0.3610 | 0.5234 | 0.4505 | 0.7525 | 20 |
| 0.3693 | 0.5641 | 0.4501 | 0.7400 | 0.3525 | 0.5607 | 0.4294 | 0.7068 | 21 |
| 0.3665 | 0.5651 | 0.4539 | 0.5632 | 0.3640 | 0.6123 | 0.4470 | 0.5765 | 22 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 4,919 | [
[
-0.049835205078125,
-0.0455322265625,
0.0177154541015625,
0.0016050338745117188,
-0.00238800048828125,
0.0010175704956054688,
-0.0016937255859375,
-0.005374908447265625,
0.051910400390625,
0.02447509765625,
-0.048919677734375,
-0.047576904296875,
-0.049835205078... |
Roguwan/DialoGPT-medium-rogu | 2023-04-10T20:58:16.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | conversational | Roguwan | null | null | Roguwan/DialoGPT-medium-rogu | 0 | 2 | transformers | 2023-04-10T17:12:09 | ---
tags:
- conversational
license: mit
---
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Roguwan/DialoGPT-medium-rogu")
model = AutoModelWithLMHead.from_pretrained("Roguwan/DialoGPT-medium-rogu")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | 1,251 | [
[
-0.0238494873046875,
-0.044464111328125,
-0.0007958412170410156,
0.0179595947265625,
-0.0136260986328125,
0.00777435302734375,
-0.003513336181640625,
0.00392913818359375,
0.0161590576171875,
0.01512908935546875,
-0.048614501953125,
-0.0193939208984375,
-0.059997... |
jmurphy97/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-10T20:52:16.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | jmurphy97 | null | null | jmurphy97/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-10T20:51:34 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 659.00 +/- 313.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmurphy97 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmurphy97 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jmurphy97
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,694 | [
[
-0.04095458984375,
-0.035400390625,
0.0214691162109375,
0.0246734619140625,
-0.0094451904296875,
-0.0165557861328125,
0.0128021240234375,
-0.0142364501953125,
0.01317596435546875,
0.0242156982421875,
-0.07135009765625,
-0.035003662109375,
-0.027435302734375,
... |
jprorama/distilbert-base-uncased-finetuned-emotion | 2023-04-11T10:47:16.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | jprorama | null | null | jprorama/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-10T21:58:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: train
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254084497083122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8639 | 1.0 | 250 | 0.3347 | 0.902 | 0.8993 |
| 0.2552 | 2.0 | 500 | 0.2237 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0
| 1,837 | [
[
-0.03765869140625,
-0.04071044921875,
0.014190673828125,
0.022674560546875,
-0.0260162353515625,
-0.02001953125,
-0.01273345947265625,
-0.0089874267578125,
0.0098876953125,
0.008575439453125,
-0.056365966796875,
-0.051605224609375,
-0.059417724609375,
-0.007... |
zhangzeyu/CT-PubMedBERT-RE-fine-tuned-noentity | 2023-04-20T10:05:59.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"region:us"
] | text-classification | zhangzeyu | null | null | zhangzeyu/CT-PubMedBERT-RE-fine-tuned-noentity | 1 | 2 | transformers | 2023-04-11T02:10:29 | ---
license: mit
inference: false
---
| Code | Relation name                                      |
|------|----------------------------------------------------|
| 0 | not_a_relation |
| 1 | active_metabolites_of |
| 2 | anatomic_structure_has_location |
| 3 | anatomic_structure_is_physical_part_of |
| 4 | anatomy_originated_from_biological_process |
| 5 | associated_with_malfunction_of_gene_product |
| 6 | biological_process_has_associated_location |
| 7 | biological_process_has_initiator_chemical_or_drug |
| 8 | biological_process_has_initiator_process |
| 9 | biological_process_has_result_anatomy |
| 10 | biological_process_has_result_biological_process |
| 11 | biological_process_has_result_chemical_or_drug |
| 12 | biological_process_involves_gene_product |
| 13 | biological_process_is_part_of_process |
| 14 | biological_process_results_from_biological_process |
| 15 | biomarker_type_includes_gene_product |
| 16 | cdrh_parent_of |
| 17 | chemical_or_drug_affects_gene_product |
| 18 | chemical_or_drug_initiates_biological_process |
| 19 | chemical_or_drug_is_product_of_biological_process |
| 20 | chemical_structure_of |
| 21 | chemotherapy_regimen_has_component |
| 22 | completely_excised_anatomy_has_procedure |
| 23 | complex_has_physical_part |
| 24 | concept_in_subset |
| 25 | conceptual_part_of |
| 26 | contraindicated_with_disease |
| 27 | contraindicating_class_of |
| 28 | disease_excludes_normal_cell_origin |
| 29 | disease_excludes_primary_anatomic_site |
| 30 | disease_has_abnormal_cell |
| 31 | disease_has_associated_anatomic_site |
| 32 | disease_has_associated_disease |
| 33 | disease_has_associated_gene |
| 34 | disease_has_finding |
| 35 | disease_has_metastatic_anatomic_site |
| 36 | disease_has_normal_cell_origin |
| 37 | disease_has_normal_tissue_origin |
| 38 | disease_has_primary_anatomic_site |
| 39 | disease_may_have_associated_disease |
| 40 | disease_may_have_finding |
| 41 | excised_anatomy_has_procedure |
| 42 | gene_associated_with_disease |
| 43 | gene_encodes_gene_product |
| 44 | gene_found_in_organism |
| 45 | gene_mapped_to_disease |
| 46 | gene_plays_role_in_process |
| 47 | gene_product_affected_by_chemical_or_drug |
| 48 | gene_product_encoded_by_gene |
| 49 | gene_product_expressed_in_tissue |
| 50 | gene_product_has_associated_anatomy |
| 51 | gene_product_has_biochemical_function |
| 52 | gene_product_has_chemical_classification |
| 53 | gene_product_has_organism_source |
| 54 | gene_product_has_structural_domain_or_motif |
| 55 | gene_product_is_biomarker_of |
| 56 | gene_product_is_physical_part_of |
| 57 | gene_product_malfunction_associated_with_disease |
| 58 | gene_product_plays_role_in_biological_process |
| 59 | has_active_metabolites |
| 60 | has_cdrh_parent |
| 61 | has_chemical_structure |
| 62 | has_conceptual_part |
| 63 | has_contraindicated_drug |
| 64 | has_contraindicating_class |
| 65 | has_free_acid_or_base_form |
| 66 | has_ingredient |
| 67 | has_mechanism_of_action |
| 68 | has_nichd_parent |
| 69 | has_physical_part_of_anatomic_structure |
| 70 | has_physiologic_effect |
| 71 | has_salt_form |
| 72 | has_therapeutic_class |
| 73 | has_tradename |
| 74 | induced_by |
| 75 | induces |
| 76 | ingredient_of |
| 77 | is_abnormal_cell_of_disease |
| 78 | is_associated_anatomic_site_of |
| 79 | is_associated_anatomy_of_gene_product |
| 80 | is_associated_disease_of |
| 81 | is_biochemical_function_of_gene_product |
| 82 | is_chemical_classification_of_gene_product |
| 83 | is_component_of_chemotherapy_regimen |
| 84 | is_finding_of_disease |
| 85 | is_location_of_anatomic_structure |
| 86 | is_location_of_biological_process |
| 87 | is_marked_by_gene_product |
| 88 | is_metastatic_anatomic_site_of_disease |
| 89 | is_normal_cell_origin_of_disease |
| 90 | is_normal_tissue_origin_of_disease |
| 91 | is_not_normal_cell_origin_of_disease |
| 92 | is_not_primary_anatomic_site_of_disease |
| 93 | is_organism_source_of_gene_product |
| 94 | is_physiologic_effect_of_chemical_or_drug |
| 95 | is_primary_anatomic_site_of_disease |
| 96 | is_structural_domain_or_motif_of_gene_product |
| 97 | may_be_associated_disease_of_disease |
| 98 | may_be_diagnosed_by |
| 99 | may_be_finding_of_disease |
| 100 | may_be_prevented_by |
| 101 | may_be_treated_by |
| 102 | may_diagnose |
| 103 | may_prevent |
| 104 | may_treat |
| 105 | mechanism_of_action_of |
| 106 | nichd_parent_of |
| 107 | organism_has_gene |
| 108 | partially_excised_anatomy_has_procedure |
| 109 | pathogenesis_of_disease_involves_gene |
| 110 | physiologic_effect_of |
| 111 | procedure_has_completely_excised_anatomy |
| 112 | procedure_has_excised_anatomy |
| 113 | procedure_has_partially_excised_anatomy |
| 114 | procedure_has_target_anatomy |
| 115 | process_includes_biological_process |
| 116 | process_initiates_biological_process |
| 117 | process_involves_gene |
| 118 | product_component_of |
| 119 | special_category_includes_neoplasm |
| 120 | subset_includes_concept |
| 121 | target_anatomy_has_procedure |
| 122 | therapeutic_class_of |
| 123 | tissue_is_expression_site_of_gene_product |
| 124 | tradename_of |
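A minimal inference sketch that maps a predicted code back to its relation name in the table above (illustrative only: it assumes the repository ships a standard sequence-classification head and tokenizer, and the example sentence and its input format are hypothetical):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "zhangzeyu/CT-PubMedBERT-RE-fine-tuned-noentity"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hypothetical input; the preprocessing used at training time
# (e.g. entity markers) may differ.
inputs = tokenizer("Imatinib may treat chronic myeloid leukemia.", return_tensors="pt")
with torch.no_grad():
    code = model(**inputs).logits.argmax(dim=-1).item()

# Look the code up in the table above, e.g. 104 -> may_treat.
print(code)
```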
| 7,912 | [
[
-0.029052734375,
-0.034332275390625,
0.01824951171875,
0.025115966796875,
-0.00799560546875,
0.0270843505859375,
0.016815185546875,
-0.0228118896484375,
0.0657958984375,
0.02484130859375,
-0.043365478515625,
-0.062408447265625,
-0.05755615234375,
0.028457641... |
zhangzeyu/CT-PubMedBERT-RE-fine-tuned-group | 2023-04-20T10:05:43.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"region:us"
] | text-classification | zhangzeyu | null | null | zhangzeyu/CT-PubMedBERT-RE-fine-tuned-group | 0 | 2 | transformers | 2023-04-11T02:24:18 | ---
license: mit
inference: false
---
| Code | Relation name                                      |
|------|----------------------------------------------------|
| 0 | not_a_relation |
| 1 | active_metabolites_of |
| 2 | anatomic_structure_has_location |
| 3 | anatomic_structure_is_physical_part_of |
| 4 | anatomy_originated_from_biological_process |
| 5 | associated_with_malfunction_of_gene_product |
| 6 | biological_process_has_associated_location |
| 7 | biological_process_has_initiator_chemical_or_drug |
| 8 | biological_process_has_initiator_process |
| 9 | biological_process_has_result_anatomy |
| 10 | biological_process_has_result_biological_process |
| 11 | biological_process_has_result_chemical_or_drug |
| 12 | biological_process_involves_gene_product |
| 13 | biological_process_is_part_of_process |
| 14 | biological_process_results_from_biological_process |
| 15 | biomarker_type_includes_gene_product |
| 16 | cdrh_parent_of |
| 17 | chemical_or_drug_affects_gene_product |
| 18 | chemical_or_drug_initiates_biological_process |
| 19 | chemical_or_drug_is_product_of_biological_process |
| 20 | chemical_structure_of |
| 21 | chemotherapy_regimen_has_component |
| 22 | completely_excised_anatomy_has_procedure |
| 23 | complex_has_physical_part |
| 24 | concept_in_subset |
| 25 | conceptual_part_of |
| 26 | contraindicated_with_disease |
| 27 | contraindicating_class_of |
| 28 | disease_excludes_normal_cell_origin |
| 29 | disease_excludes_primary_anatomic_site |
| 30 | disease_has_abnormal_cell |
| 31 | disease_has_associated_anatomic_site |
| 32 | disease_has_associated_disease |
| 33 | disease_has_associated_gene |
| 34 | disease_has_finding |
| 35 | disease_has_metastatic_anatomic_site |
| 36 | disease_has_normal_cell_origin |
| 37 | disease_has_normal_tissue_origin |
| 38 | disease_has_primary_anatomic_site |
| 39 | disease_may_have_associated_disease |
| 40 | disease_may_have_finding |
| 41 | excised_anatomy_has_procedure |
| 42 | gene_associated_with_disease |
| 43 | gene_encodes_gene_product |
| 44 | gene_found_in_organism |
| 45 | gene_mapped_to_disease |
| 46 | gene_plays_role_in_process |
| 47 | gene_product_affected_by_chemical_or_drug |
| 48 | gene_product_encoded_by_gene |
| 49 | gene_product_expressed_in_tissue |
| 50 | gene_product_has_associated_anatomy |
| 51 | gene_product_has_biochemical_function |
| 52 | gene_product_has_chemical_classification |
| 53 | gene_product_has_organism_source |
| 54 | gene_product_has_structural_domain_or_motif |
| 55 | gene_product_is_biomarker_of |
| 56 | gene_product_is_physical_part_of |
| 57 | gene_product_malfunction_associated_with_disease |
| 58 | gene_product_plays_role_in_biological_process |
| 59 | has_active_metabolites |
| 60 | has_cdrh_parent |
| 61 | has_chemical_structure |
| 62 | has_conceptual_part |
| 63 | has_contraindicated_drug |
| 64 | has_contraindicating_class |
| 65 | has_free_acid_or_base_form |
| 66 | has_ingredient |
| 67 | has_mechanism_of_action |
| 68 | has_nichd_parent |
| 69 | has_physical_part_of_anatomic_structure |
| 70 | has_physiologic_effect |
| 71 | has_salt_form |
| 72 | has_therapeutic_class |
| 73 | has_tradename |
| 74 | induced_by |
| 75 | induces |
| 76 | ingredient_of |
| 77 | is_abnormal_cell_of_disease |
| 78 | is_associated_anatomic_site_of |
| 79 | is_associated_anatomy_of_gene_product |
| 80 | is_associated_disease_of |
| 81 | is_biochemical_function_of_gene_product |
| 82 | is_chemical_classification_of_gene_product |
| 83 | is_component_of_chemotherapy_regimen |
| 84 | is_finding_of_disease |
| 85 | is_location_of_anatomic_structure |
| 86 | is_location_of_biological_process |
| 87 | is_marked_by_gene_product |
| 88 | is_metastatic_anatomic_site_of_disease |
| 89 | is_normal_cell_origin_of_disease |
| 90 | is_normal_tissue_origin_of_disease |
| 91 | is_not_normal_cell_origin_of_disease |
| 92 | is_not_primary_anatomic_site_of_disease |
| 93 | is_organism_source_of_gene_product |
| 94 | is_physiologic_effect_of_chemical_or_drug |
| 95 | is_primary_anatomic_site_of_disease |
| 96 | is_structural_domain_or_motif_of_gene_product |
| 97 | may_be_associated_disease_of_disease |
| 98 | may_be_diagnosed_by |
| 99 | may_be_finding_of_disease |
| 100 | may_be_prevented_by |
| 101 | may_be_treated_by |
| 102 | may_diagnose |
| 103 | may_prevent |
| 104 | may_treat |
| 105 | mechanism_of_action_of |
| 106 | nichd_parent_of |
| 107 | organism_has_gene |
| 108 | partially_excised_anatomy_has_procedure |
| 109 | pathogenesis_of_disease_involves_gene |
| 110 | physiologic_effect_of |
| 111 | procedure_has_completely_excised_anatomy |
| 112 | procedure_has_excised_anatomy |
| 113 | procedure_has_partially_excised_anatomy |
| 114 | procedure_has_target_anatomy |
| 115 | process_includes_biological_process |
| 116 | process_initiates_biological_process |
| 117 | process_involves_gene |
| 118 | product_component_of |
| 119 | special_category_includes_neoplasm |
| 120 | subset_includes_concept |
| 121 | target_anatomy_has_procedure |
| 122 | therapeutic_class_of |
| 123 | tissue_is_expression_site_of_gene_product |
| 124 | tradename_of |
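The card includes no usage snippet; below is a minimal inference sketch, assuming the standard `transformers` sequence-classification API. The input sentence is only illustrative — the entity-pair formatting the model expects (e.g. special entity markers) is not documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "zhangzeyu/CT-PubMedBERT-RE-fine-tuned-group"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical input: the exact entity markup expected by the model is unknown.
text = "Imatinib may be used to treat chronic myeloid leukemia."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()

# Map the predicted class index back to a relation name via the table above.
print(pred)
```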
| 7,912 | [
[
-0.0290374755859375,
-0.034332275390625,
0.01824951171875,
0.025115966796875,
-0.0079803466796875,
0.0270843505859375,
0.016815185546875,
-0.0228271484375,
0.0657958984375,
0.02484130859375,
-0.043365478515625,
-0.062408447265625,
-0.0574951171875,
0.0284423... |
zhangzeyu/CT-PubMedBERT-RE-fine-tuned-type | 2023-04-20T10:04:41.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"region:us"
] | text-classification | zhangzeyu | null | null | zhangzeyu/CT-PubMedBERT-RE-fine-tuned-type | 0 | 2 | transformers | 2023-04-11T02:28:46 | ---
license: mit
inference: false
---
| Code | Relation name |
|------|----------------------------------------------------|
| 0 | not_a_relation |
| 1 | active_metabolites_of |
| 2 | anatomic_structure_has_location |
| 3 | anatomic_structure_is_physical_part_of |
| 4 | anatomy_originated_from_biological_process |
| 5 | associated_with_malfunction_of_gene_product |
| 6 | biological_process_has_associated_location |
| 7 | biological_process_has_initiator_chemical_or_drug |
| 8 | biological_process_has_initiator_process |
| 9 | biological_process_has_result_anatomy |
| 10 | biological_process_has_result_biological_process |
| 11 | biological_process_has_result_chemical_or_drug |
| 12 | biological_process_involves_gene_product |
| 13 | biological_process_is_part_of_process |
| 14 | biological_process_results_from_biological_process |
| 15 | biomarker_type_includes_gene_product |
| 16 | cdrh_parent_of |
| 17 | chemical_or_drug_affects_gene_product |
| 18 | chemical_or_drug_initiates_biological_process |
| 19 | chemical_or_drug_is_product_of_biological_process |
| 20 | chemical_structure_of |
| 21 | chemotherapy_regimen_has_component |
| 22 | completely_excised_anatomy_has_procedure |
| 23 | complex_has_physical_part |
| 24 | concept_in_subset |
| 25 | conceptual_part_of |
| 26 | contraindicated_with_disease |
| 27 | contraindicating_class_of |
| 28 | disease_excludes_normal_cell_origin |
| 29 | disease_excludes_primary_anatomic_site |
| 30 | disease_has_abnormal_cell |
| 31 | disease_has_associated_anatomic_site |
| 32 | disease_has_associated_disease |
| 33 | disease_has_associated_gene |
| 34 | disease_has_finding |
| 35 | disease_has_metastatic_anatomic_site |
| 36 | disease_has_normal_cell_origin |
| 37 | disease_has_normal_tissue_origin |
| 38 | disease_has_primary_anatomic_site |
| 39 | disease_may_have_associated_disease |
| 40 | disease_may_have_finding |
| 41 | excised_anatomy_has_procedure |
| 42 | gene_associated_with_disease |
| 43 | gene_encodes_gene_product |
| 44 | gene_found_in_organism |
| 45 | gene_mapped_to_disease |
| 46 | gene_plays_role_in_process |
| 47 | gene_product_affected_by_chemical_or_drug |
| 48 | gene_product_encoded_by_gene |
| 49 | gene_product_expressed_in_tissue |
| 50 | gene_product_has_associated_anatomy |
| 51 | gene_product_has_biochemical_function |
| 52 | gene_product_has_chemical_classification |
| 53 | gene_product_has_organism_source |
| 54 | gene_product_has_structural_domain_or_motif |
| 55 | gene_product_is_biomarker_of |
| 56 | gene_product_is_physical_part_of |
| 57 | gene_product_malfunction_associated_with_disease |
| 58 | gene_product_plays_role_in_biological_process |
| 59 | has_active_metabolites |
| 60 | has_cdrh_parent |
| 61 | has_chemical_structure |
| 62 | has_conceptual_part |
| 63 | has_contraindicated_drug |
| 64 | has_contraindicating_class |
| 65 | has_free_acid_or_base_form |
| 66 | has_ingredient |
| 67 | has_mechanism_of_action |
| 68 | has_nichd_parent |
| 69 | has_physical_part_of_anatomic_structure |
| 70 | has_physiologic_effect |
| 71 | has_salt_form |
| 72 | has_therapeutic_class |
| 73 | has_tradename |
| 74 | induced_by |
| 75 | induces |
| 76 | ingredient_of |
| 77 | is_abnormal_cell_of_disease |
| 78 | is_associated_anatomic_site_of |
| 79 | is_associated_anatomy_of_gene_product |
| 80 | is_associated_disease_of |
| 81 | is_biochemical_function_of_gene_product |
| 82 | is_chemical_classification_of_gene_product |
| 83 | is_component_of_chemotherapy_regimen |
| 84 | is_finding_of_disease |
| 85 | is_location_of_anatomic_structure |
| 86 | is_location_of_biological_process |
| 87 | is_marked_by_gene_product |
| 88 | is_metastatic_anatomic_site_of_disease |
| 89 | is_normal_cell_origin_of_disease |
| 90 | is_normal_tissue_origin_of_disease |
| 91 | is_not_normal_cell_origin_of_disease |
| 92 | is_not_primary_anatomic_site_of_disease |
| 93 | is_organism_source_of_gene_product |
| 94 | is_physiologic_effect_of_chemical_or_drug |
| 95 | is_primary_anatomic_site_of_disease |
| 96 | is_structural_domain_or_motif_of_gene_product |
| 97 | may_be_associated_disease_of_disease |
| 98 | may_be_diagnosed_by |
| 99 | may_be_finding_of_disease |
| 100 | may_be_prevented_by |
| 101 | may_be_treated_by |
| 102 | may_diagnose |
| 103 | may_prevent |
| 104 | may_treat |
| 105 | mechanism_of_action_of |
| 106 | nichd_parent_of |
| 107 | organism_has_gene |
| 108 | partially_excised_anatomy_has_procedure |
| 109 | pathogenesis_of_disease_involves_gene |
| 110 | physiologic_effect_of |
| 111 | procedure_has_completely_excised_anatomy |
| 112 | procedure_has_excised_anatomy |
| 113 | procedure_has_partially_excised_anatomy |
| 114 | procedure_has_target_anatomy |
| 115 | process_includes_biological_process |
| 116 | process_initiates_biological_process |
| 117 | process_involves_gene |
| 118 | product_component_of |
| 119 | special_category_includes_neoplasm |
| 120 | subset_includes_concept |
| 121 | target_anatomy_has_procedure |
| 122 | therapeutic_class_of |
| 123 | tissue_is_expression_site_of_gene_product |
| 124 | tradename_of |
| 7,912 | [
[
-0.0290374755859375,
-0.034332275390625,
0.0182647705078125,
0.02508544921875,
-0.00800323486328125,
0.0270843505859375,
0.016815185546875,
-0.0228271484375,
0.0657958984375,
0.02484130859375,
-0.043365478515625,
-0.062408447265625,
-0.05755615234375,
0.0284... |
Muhsabrys/autotrain-mynaguib-48414117632 | 2023-04-11T02:44:27.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"ar",
"dataset:Muhsabrys/autotrain-data-mynaguib",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-mynaguib-48414117632 | 0 | 2 | transformers | 2023-04-11T02:43:16 | ---
tags:
- autotrain
- text-classification
language:
- ar
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-mynaguib
co2_eq_emissions:
emissions: 0.510452418180777
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 48414117632
- CO2 Emissions (in grams): 0.5105
## Validation Metrics
- Loss: 0.004
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-mynaguib-48414117632
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-mynaguib-48414117632", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-mynaguib-48414117632", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,133 | [
[
-0.031982421875,
-0.025360107421875,
0.0148773193359375,
0.01419830322265625,
0.0007548332214355469,
0.0003619194030761719,
0.008880615234375,
-0.0111236572265625,
0.0080718994140625,
0.01326751708984375,
-0.062469482421875,
-0.0328369140625,
-0.05645751953125,
... |
zhangzeyu/CT-PubMedBERT-RE-fine-tuned-groupabb | 2023-04-20T10:05:21.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"region:us"
] | text-classification | zhangzeyu | null | null | zhangzeyu/CT-PubMedBERT-RE-fine-tuned-groupabb | 0 | 2 | transformers | 2023-04-11T02:55:26 | ---
license: mit
inference: false
---
| Code | Relation name |
|------|----------------------------------------------------|
| 0 | not_a_relation |
| 1 | active_metabolites_of |
| 2 | anatomic_structure_has_location |
| 3 | anatomic_structure_is_physical_part_of |
| 4 | anatomy_originated_from_biological_process |
| 5 | associated_with_malfunction_of_gene_product |
| 6 | biological_process_has_associated_location |
| 7 | biological_process_has_initiator_chemical_or_drug |
| 8 | biological_process_has_initiator_process |
| 9 | biological_process_has_result_anatomy |
| 10 | biological_process_has_result_biological_process |
| 11 | biological_process_has_result_chemical_or_drug |
| 12 | biological_process_involves_gene_product |
| 13 | biological_process_is_part_of_process |
| 14 | biological_process_results_from_biological_process |
| 15 | biomarker_type_includes_gene_product |
| 16 | cdrh_parent_of |
| 17 | chemical_or_drug_affects_gene_product |
| 18 | chemical_or_drug_initiates_biological_process |
| 19 | chemical_or_drug_is_product_of_biological_process |
| 20 | chemical_structure_of |
| 21 | chemotherapy_regimen_has_component |
| 22 | completely_excised_anatomy_has_procedure |
| 23 | complex_has_physical_part |
| 24 | concept_in_subset |
| 25 | conceptual_part_of |
| 26 | contraindicated_with_disease |
| 27 | contraindicating_class_of |
| 28 | disease_excludes_normal_cell_origin |
| 29 | disease_excludes_primary_anatomic_site |
| 30 | disease_has_abnormal_cell |
| 31 | disease_has_associated_anatomic_site |
| 32 | disease_has_associated_disease |
| 33 | disease_has_associated_gene |
| 34 | disease_has_finding |
| 35 | disease_has_metastatic_anatomic_site |
| 36 | disease_has_normal_cell_origin |
| 37 | disease_has_normal_tissue_origin |
| 38 | disease_has_primary_anatomic_site |
| 39 | disease_may_have_associated_disease |
| 40 | disease_may_have_finding |
| 41 | excised_anatomy_has_procedure |
| 42 | gene_associated_with_disease |
| 43 | gene_encodes_gene_product |
| 44 | gene_found_in_organism |
| 45 | gene_mapped_to_disease |
| 46 | gene_plays_role_in_process |
| 47 | gene_product_affected_by_chemical_or_drug |
| 48 | gene_product_encoded_by_gene |
| 49 | gene_product_expressed_in_tissue |
| 50 | gene_product_has_associated_anatomy |
| 51 | gene_product_has_biochemical_function |
| 52 | gene_product_has_chemical_classification |
| 53 | gene_product_has_organism_source |
| 54 | gene_product_has_structural_domain_or_motif |
| 55 | gene_product_is_biomarker_of |
| 56 | gene_product_is_physical_part_of |
| 57 | gene_product_malfunction_associated_with_disease |
| 58 | gene_product_plays_role_in_biological_process |
| 59 | has_active_metabolites |
| 60 | has_cdrh_parent |
| 61 | has_chemical_structure |
| 62 | has_conceptual_part |
| 63 | has_contraindicated_drug |
| 64 | has_contraindicating_class |
| 65 | has_free_acid_or_base_form |
| 66 | has_ingredient |
| 67 | has_mechanism_of_action |
| 68 | has_nichd_parent |
| 69 | has_physical_part_of_anatomic_structure |
| 70 | has_physiologic_effect |
| 71 | has_salt_form |
| 72 | has_therapeutic_class |
| 73 | has_tradename |
| 74 | induced_by |
| 75 | induces |
| 76 | ingredient_of |
| 77 | is_abnormal_cell_of_disease |
| 78 | is_associated_anatomic_site_of |
| 79 | is_associated_anatomy_of_gene_product |
| 80 | is_associated_disease_of |
| 81 | is_biochemical_function_of_gene_product |
| 82 | is_chemical_classification_of_gene_product |
| 83 | is_component_of_chemotherapy_regimen |
| 84 | is_finding_of_disease |
| 85 | is_location_of_anatomic_structure |
| 86 | is_location_of_biological_process |
| 87 | is_marked_by_gene_product |
| 88 | is_metastatic_anatomic_site_of_disease |
| 89 | is_normal_cell_origin_of_disease |
| 90 | is_normal_tissue_origin_of_disease |
| 91 | is_not_normal_cell_origin_of_disease |
| 92 | is_not_primary_anatomic_site_of_disease |
| 93 | is_organism_source_of_gene_product |
| 94 | is_physiologic_effect_of_chemical_or_drug |
| 95 | is_primary_anatomic_site_of_disease |
| 96 | is_structural_domain_or_motif_of_gene_product |
| 97 | may_be_associated_disease_of_disease |
| 98 | may_be_diagnosed_by |
| 99 | may_be_finding_of_disease |
| 100 | may_be_prevented_by |
| 101 | may_be_treated_by |
| 102 | may_diagnose |
| 103 | may_prevent |
| 104 | may_treat |
| 105 | mechanism_of_action_of |
| 106 | nichd_parent_of |
| 107 | organism_has_gene |
| 108 | partially_excised_anatomy_has_procedure |
| 109 | pathogenesis_of_disease_involves_gene |
| 110 | physiologic_effect_of |
| 111 | procedure_has_completely_excised_anatomy |
| 112 | procedure_has_excised_anatomy |
| 113 | procedure_has_partially_excised_anatomy |
| 114 | procedure_has_target_anatomy |
| 115 | process_includes_biological_process |
| 116 | process_initiates_biological_process |
| 117 | process_involves_gene |
| 118 | product_component_of |
| 119 | special_category_includes_neoplasm |
| 120 | subset_includes_concept |
| 121 | target_anatomy_has_procedure |
| 122 | therapeutic_class_of |
| 123 | tissue_is_expression_site_of_gene_product |
| 124 | tradename_of |
| 7,912 | [
[
-0.0290374755859375,
-0.034332275390625,
0.0182647705078125,
0.02508544921875,
-0.00800323486328125,
0.0270843505859375,
0.016815185546875,
-0.0228271484375,
0.0657958984375,
0.02484130859375,
-0.043365478515625,
-0.062408447265625,
-0.05755615234375,
0.0284... |
zhangzeyu/CT-PubMedBERT-RE-fine-tuned-typecode | 2023-04-20T10:04:02.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"region:us"
] | text-classification | zhangzeyu | null | null | zhangzeyu/CT-PubMedBERT-RE-fine-tuned-typecode | 0 | 2 | transformers | 2023-04-11T02:57:45 | ---
license: mit
inference: false
---
| Code | Relation name |
|------|----------------------------------------------------|
| 0 | not_a_relation |
| 1 | active_metabolites_of |
| 2 | anatomic_structure_has_location |
| 3 | anatomic_structure_is_physical_part_of |
| 4 | anatomy_originated_from_biological_process |
| 5 | associated_with_malfunction_of_gene_product |
| 6 | biological_process_has_associated_location |
| 7 | biological_process_has_initiator_chemical_or_drug |
| 8 | biological_process_has_initiator_process |
| 9 | biological_process_has_result_anatomy |
| 10 | biological_process_has_result_biological_process |
| 11 | biological_process_has_result_chemical_or_drug |
| 12 | biological_process_involves_gene_product |
| 13 | biological_process_is_part_of_process |
| 14 | biological_process_results_from_biological_process |
| 15 | biomarker_type_includes_gene_product |
| 16 | cdrh_parent_of |
| 17 | chemical_or_drug_affects_gene_product |
| 18 | chemical_or_drug_initiates_biological_process |
| 19 | chemical_or_drug_is_product_of_biological_process |
| 20 | chemical_structure_of |
| 21 | chemotherapy_regimen_has_component |
| 22 | completely_excised_anatomy_has_procedure |
| 23 | complex_has_physical_part |
| 24 | concept_in_subset |
| 25 | conceptual_part_of |
| 26 | contraindicated_with_disease |
| 27 | contraindicating_class_of |
| 28 | disease_excludes_normal_cell_origin |
| 29 | disease_excludes_primary_anatomic_site |
| 30 | disease_has_abnormal_cell |
| 31 | disease_has_associated_anatomic_site |
| 32 | disease_has_associated_disease |
| 33 | disease_has_associated_gene |
| 34 | disease_has_finding |
| 35 | disease_has_metastatic_anatomic_site |
| 36 | disease_has_normal_cell_origin |
| 37 | disease_has_normal_tissue_origin |
| 38 | disease_has_primary_anatomic_site |
| 39 | disease_may_have_associated_disease |
| 40 | disease_may_have_finding |
| 41 | excised_anatomy_has_procedure |
| 42 | gene_associated_with_disease |
| 43 | gene_encodes_gene_product |
| 44 | gene_found_in_organism |
| 45 | gene_mapped_to_disease |
| 46 | gene_plays_role_in_process |
| 47 | gene_product_affected_by_chemical_or_drug |
| 48 | gene_product_encoded_by_gene |
| 49 | gene_product_expressed_in_tissue |
| 50 | gene_product_has_associated_anatomy |
| 51 | gene_product_has_biochemical_function |
| 52 | gene_product_has_chemical_classification |
| 53 | gene_product_has_organism_source |
| 54 | gene_product_has_structural_domain_or_motif |
| 55 | gene_product_is_biomarker_of |
| 56 | gene_product_is_physical_part_of |
| 57 | gene_product_malfunction_associated_with_disease |
| 58 | gene_product_plays_role_in_biological_process |
| 59 | has_active_metabolites |
| 60 | has_cdrh_parent |
| 61 | has_chemical_structure |
| 62 | has_conceptual_part |
| 63 | has_contraindicated_drug |
| 64 | has_contraindicating_class |
| 65 | has_free_acid_or_base_form |
| 66 | has_ingredient |
| 67 | has_mechanism_of_action |
| 68 | has_nichd_parent |
| 69 | has_physical_part_of_anatomic_structure |
| 70 | has_physiologic_effect |
| 71 | has_salt_form |
| 72 | has_therapeutic_class |
| 73 | has_tradename |
| 74 | induced_by |
| 75 | induces |
| 76 | ingredient_of |
| 77 | is_abnormal_cell_of_disease |
| 78 | is_associated_anatomic_site_of |
| 79 | is_associated_anatomy_of_gene_product |
| 80 | is_associated_disease_of |
| 81 | is_biochemical_function_of_gene_product |
| 82 | is_chemical_classification_of_gene_product |
| 83 | is_component_of_chemotherapy_regimen |
| 84 | is_finding_of_disease |
| 85 | is_location_of_anatomic_structure |
| 86 | is_location_of_biological_process |
| 87 | is_marked_by_gene_product |
| 88 | is_metastatic_anatomic_site_of_disease |
| 89 | is_normal_cell_origin_of_disease |
| 90 | is_normal_tissue_origin_of_disease |
| 91 | is_not_normal_cell_origin_of_disease |
| 92 | is_not_primary_anatomic_site_of_disease |
| 93 | is_organism_source_of_gene_product |
| 94 | is_physiologic_effect_of_chemical_or_drug |
| 95 | is_primary_anatomic_site_of_disease |
| 96 | is_structural_domain_or_motif_of_gene_product |
| 97 | may_be_associated_disease_of_disease |
| 98 | may_be_diagnosed_by |
| 99 | may_be_finding_of_disease |
| 100 | may_be_prevented_by |
| 101 | may_be_treated_by |
| 102 | may_diagnose |
| 103 | may_prevent |
| 104 | may_treat |
| 105 | mechanism_of_action_of |
| 106 | nichd_parent_of |
| 107 | organism_has_gene |
| 108 | partially_excised_anatomy_has_procedure |
| 109 | pathogenesis_of_disease_involves_gene |
| 110 | physiologic_effect_of |
| 111 | procedure_has_completely_excised_anatomy |
| 112 | procedure_has_excised_anatomy |
| 113 | procedure_has_partially_excised_anatomy |
| 114 | procedure_has_target_anatomy |
| 115 | process_includes_biological_process |
| 116 | process_initiates_biological_process |
| 117 | process_involves_gene |
| 118 | product_component_of |
| 119 | special_category_includes_neoplasm |
| 120 | subset_includes_concept |
| 121 | target_anatomy_has_procedure |
| 122 | therapeutic_class_of |
| 123 | tissue_is_expression_site_of_gene_product |
| 124 | tradename_of |
| 7,912 | [
[
-0.0290374755859375,
-0.034332275390625,
0.0182647705078125,
0.02508544921875,
-0.00800323486328125,
0.0270843505859375,
0.016815185546875,
-0.0228271484375,
0.0657958984375,
0.02484130859375,
-0.043365478515625,
-0.062408447265625,
-0.05755615234375,
0.0284... |
dingzhaohan/distilbert-base-uncased-finetuned-cola | 2023-04-13T08:39:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | dingzhaohan | null | null | dingzhaohan/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-11T05:47:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.10315004767907714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE CoLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6976
- Matthews Correlation: 0.1032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6135 | 1.0 | 535 | 0.6257 | 0.0 |
| 0.6078 | 2.0 | 1070 | 0.6187 | 0.0 |
| 0.6038 | 3.0 | 1605 | 0.6179 | -0.0041 |
| 0.5649 | 4.0 | 2140 | 0.6509 | 0.1006 |
| 0.5093 | 5.0 | 2675 | 0.6976 | 0.1032 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,049 | [
[
-0.0236358642578125,
-0.04949951171875,
0.00836944580078125,
0.0197601318359375,
-0.0189361572265625,
-0.00893402099609375,
-0.005962371826171875,
-0.00368499755859375,
0.0227508544921875,
0.01006317138671875,
-0.0452880859375,
-0.0374755859375,
-0.0631713867187... |
hoang14/pegasus-finetuned-samsum | 2023-04-11T10:20:05.000Z | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | hoang14 | null | null | hoang14/pegasus-finetuned-samsum | 0 | 2 | transformers | 2023-04-11T09:39:45 | ---
license: apache-2.0
language:
- en
metrics:
- rouge
datasets:
- samsum
pipeline_tag: text2text-generation
---
A summarization model based on Pegasus, fine-tuned on the SAMSum dataset.
source code: https://colab.research.google.com/drive/1FxdOV1fiHY3JC6dFw5T-NED1J8dKKHSO#scrollTo=pgdQ2up7vJoU
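A minimal usage sketch, assuming the standard `transformers` summarization pipeline (this snippet is not part of the original card):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="hoang14/pegasus-finetuned-samsum")

# SAMSum-style input: a short chat dialogue to be summarized.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```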
Metrics on the SAMSum dataset:
- rouge1: 0.436239
- rouge2: 0.209266
- rougeL: 0.34446
- rougeLsum: 0.344428 | 400 | [
[
-0.021728515625,
-0.03564453125,
0.02001953125,
0.02081298828125,
-0.040924072265625,
-0.03125,
0.026123046875,
0.0024967193603515625,
0.08050537109375,
0.059173583984375,
-0.05419921875,
-0.03662109375,
-0.052642822265625,
-0.01617431640625,
-0.03527832... |
cybersyn/robertuito-homomex-track1 | 2023-04-24T15:58:22.000Z | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | cybersyn | null | null | cybersyn/robertuito-homomex-track1 | 0 | 2 | transformers | 2023-04-11T10:50:10 | ---
tags:
- generated_from_keras_callback
model-index:
- name: robertuito-homomex-track1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robertuito-homomex-track1
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
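A minimal inference sketch is given below. It assumes the `transformers` pipeline with TensorFlow weights (only a TF checkpoint is tagged for this repository); it is not part of the original card.

```python
from transformers import pipeline

# framework="tf" because this repository is tagged with TensorFlow weights only.
classifier = pipeline(
    "text-classification",
    model="cybersyn/robertuito-homomex-track1",
    framework="tf",
)
# RoBERTuito models usually expect tweet preprocessing (user/URL normalization),
# e.g. pysentimiento's preprocess_tweet; plain text is used here for brevity.
print(classifier("ejemplo de tuit en español"))
```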
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamW', 'weight_decay': 0.004, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,467 | [
[
-0.037811279296875,
-0.037841796875,
0.0304107666015625,
0.012420654296875,
-0.039276123046875,
-0.009033203125,
-0.018157958984375,
-0.0201263427734375,
0.0218353271484375,
0.006805419921875,
-0.0638427734375,
-0.0543212890625,
-0.059417724609375,
-0.012481... |
Augcos/ML-Agents-Pyramids | 2023-04-11T10:50:30.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Augcos | null | null | Augcos/ML-Agents-Pyramids | 0 | 2 | ml-agents | 2023-04-11T10:50:25 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Augcos/ML-Agents-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 955 | [
[
-0.0284576416015625,
-0.02001953125,
-0.0003368854522705078,
0.0272979736328125,
-0.00910186767578125,
0.006649017333984375,
0.027984619140625,
-0.00414276123046875,
0.035736083984375,
0.0362548828125,
-0.036376953125,
-0.051116943359375,
-0.036041259765625,
... |
szilard/bert-base-banking77-pt2 | 2023-04-11T14:32:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | szilard | null | null | szilard/bert-base-banking77-pt2 | 0 | 2 | transformers | 2023-04-11T13:03:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9293371477596352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3046
- F1: 0.9293
## Model description
More information needed
## Intended uses & limitations
More information needed
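A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (not part of the original card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="szilard/bert-base-banking77-pt2")

# The returned label string depends on the id2label mapping stored in the
# checkpoint (banking77 defines 77 customer-intent classes).
print(classifier("I am still waiting on my card?"))
```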
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2034 | 1.0 | 626 | 0.8513 | 0.8310 |
| 0.4223 | 2.0 | 1252 | 0.3760 | 0.9150 |
| 0.2017 | 3.0 | 1878 | 0.3046 | 0.9293 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,728 | [
[
-0.0289764404296875,
-0.03955078125,
0.010498046875,
0.013458251953125,
-0.04443359375,
-0.0262298583984375,
-0.00881195068359375,
-0.01837158203125,
-0.0038127899169921875,
0.04083251953125,
-0.0445556640625,
-0.043548583984375,
-0.052825927734375,
-0.02839... |
seanghay/whisper-small-khmer | 2023-04-19T02:53:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"onnx",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"km",
"dataset:openslr",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | seanghay | null | null | seanghay/whisper-small-khmer | 1 | 2 | transformers | 2023-04-11T13:43:27 | ---
language:
- km
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- openslr
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Khmer Spaced - Seanghay Yath
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Google FLEURS
type: google/fleurs
config: km_kh
split: all
metrics:
- name: Wer
type: wer
value: 0.6464
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-khmer
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Google FLEURS and OpenSLR (SLR42) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4657
- Wer: 0.6464
## Model description
This model is fine-tuned on the Google FLEURS and OpenSLR (SLR42) datasets.
- [ggml-model.bin](https://huggingface.co/seanghay/whisper-small-khmer/blob/main/ggml-model.bin)
- [model.onnx](https://huggingface.co/seanghay/whisper-small-khmer/blob/main/model.onnx)
```python
from transformers import pipeline
pipe = pipeline(
    task="automatic-speech-recognition",
    model="seanghay/whisper-small-khmer",
)
result = pipe(
    "audio.wav",
    generate_kwargs={"language": "<|km|>", "task": "transcribe"},
    batch_size=16,
)
print(result["text"])
```
## whisper.cpp
### 1. Transcode the input audio to 16kHz PCM
```shell
ffmpeg -i audio.ogg -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```
### 2. Transcribe with whisper.cpp
```shell
./main -m ggml-model.bin -f output.wav --print-colors --language km
```
## Training and evaluation data
- `training` = google/fleurs['train+validation'] + openslr['train']
- `eval` = google/fleurs['test']
## Training procedure
This model was trained with the code from the project on [GitHub](https://github.com/seanghay/whisper-tiny-khmer), using an NVIDIA A10 24GB GPU.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2065 | 3.37 | 1000 | 0.3403 | 0.7929 |
| 0.0446 | 6.73 | 2000 | 0.2911 | 0.6961 |
| 0.008 | 10.1 | 3000 | 0.3578 | 0.6627 |
| 0.003 | 13.47 | 4000 | 0.3982 | 0.6564 |
| 0.0012 | 16.84 | 5000 | 0.4287 | 0.6512 |
| 0.0004 | 20.2 | 6000 | 0.4499 | 0.6419 |
| 0.0001 | 23.57 | 7000 | 0.4614 | 0.6469 |
| 0.0001 | 26.94 | 8000 | 0.4657 | 0.6464 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.1.dev0
- Tokenizers 0.13.3
| 3,122 | [
[
-0.034698486328125,
-0.046630859375,
0.0125579833984375,
0.0016651153564453125,
-0.0210723876953125,
-0.02142333984375,
-0.0197601318359375,
-0.031768798828125,
0.01035308837890625,
0.020111083984375,
-0.04534912109375,
-0.047210693359375,
-0.053375244140625,
... |
ku-accms/bert-base-japanese-ssuw | 2023-04-12T04:40:42.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | ku-accms | null | null | ku-accms/bert-base-japanese-ssuw | 1 | 2 | transformers | 2023-04-11T13:57:30 | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- bert
- fill-mask
datasets:
- wikipedia
mask_token: "[MASK]"
widget:
- text: "京都 大学 で [MASK] を 専攻 する 。"
- text: "東京 は 日本 の [MASK] だ 。"
- text: "カフェ で [MASK] を 注文 する 。"
---
# ku-accms/bert-base-japanese-ssuw
## Model description
This is a pre-trained Japanese BERT base model for super short unit words (SSUW).
## Pre-processing
The input text should be converted to full-width (zenkaku) characters and segmented into super short unit words in advance (e.g., by KyTea).
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ku-accms/bert-base-japanese-ssuw')
>>> unmasker("京都 大学 で [MASK] を 専攻 する 。")
[{'sequence': '京都 大学 で 文学 を 専攻 する 。',
  'score': '0.1464807540178299',
  'token': '14603',
  'token_str': '文学'},
 {'sequence': '京都 大学 で 哲学 を 専攻 する 。',
  'score': '0.08064978569746017',
  'token': '15917',
  'token_str': '哲学'},
 {'sequence': '京都 大学 で 演劇 を 専攻 する 。',
  'score': '0.0800977498292923',
  'token': '16772',
  'token_str': '演劇'},
 {'sequence': '京都 大学 で 法学 を 専攻 する 。',
  'score': '0.04579947143793106',
  'token': '16255',
  'token_str': '法学'},
 {'sequence': '京都 大学 で 英語 を 専攻 する 。',
  'score': '0.045536939054727554',
  'token': '14592',
  'token_str': '英語'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
import zenhan
import Mykytea

kytea_model_path = "somewhere"  # path to a trained KyTea model file
kytea = Mykytea.Mykytea("-model {} -notags".format(kytea_model_path))

def preprocess(text):
    # Convert to full-width (zenkaku) characters, then segment with KyTea
    return " ".join(kytea.getWS(zenhan.h2z(text)))
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('ku-accms/bert-base-japanese-ssuw')
model = BertModel.from_pretrained("ku-accms/bert-base-japanese-ssuw")
text = "京都大学で自然言語処理を専攻する。"
encoded_input = tokenizer(preprocess(text), return_tensors='pt')
output = model(**encoded_input)
```
## Training data
We used a Japanese Wikipedia dump (as of 20230101, 3.3GB).
## Training procedure
We first segmented the texts into words by KyTea and then tokenized the words into subwords using WordPiece with a vocabulary size of 32,000. We pre-trained the BERT model using [transformers](https://github.com/huggingface/transformers) library. The training took about 8 days using 4 NVIDIA A100-SXM4-80GB GPUs.
The following hyperparameters were used for the pre-training.
- learning_rate: 2e-4
- weight decay: 1e-2
- per_device_train_batch_size: 80
- num_devices: 4
- gradient_accumulation_steps: 3
- total_train_batch_size: 960
- max_seq_length: 512
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 500,000
- warmup_steps: 10,000 | 2,819 | [
[
-0.0291748046875,
-0.06890869140625,
0.0295257568359375,
0.004558563232421875,
-0.053192138671875,
-0.007541656494140625,
-0.03759765625,
-0.019012451171875,
0.0268707275390625,
0.02728271484375,
-0.044677734375,
-0.041778564453125,
-0.0421142578125,
0.00084... |
amalik27/bert_ai | 2023-04-11T21:05:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | amalik27 | null | null | amalik27/bert_ai | 0 | 2 | transformers | 2023-04-11T14:58:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert_ai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_ai
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0761
- Accuracy: 0.9913
- F1: 0.9913
- Precision: 0.9833
- Recall: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0358 | 1.0 | 6059 | 0.0390 | 0.9923 | 0.9923 | 0.9859 | 0.9989 |
| 0.0187 | 2.0 | 12118 | 0.0738 | 0.9884 | 0.9884 | 0.9779 | 0.9993 |
| 0.0056 | 3.0 | 18177 | 0.0761 | 0.9913 | 0.9913 | 0.9833 | 0.9995 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,659 | [
[
-0.034912109375,
-0.044281005859375,
0.01508331298828125,
0.01024627685546875,
-0.02252197265625,
-0.031982421875,
-0.01230621337890625,
-0.0229339599609375,
0.01438140869140625,
0.0185394287109375,
-0.053619384765625,
-0.043243408203125,
-0.04681396484375,
... |
amalik27/bert_human | 2023-04-12T00:30:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | amalik27 | null | null | amalik27/bert_human | 0 | 2 | transformers | 2023-04-11T15:04:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert_human
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_human
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0451
- Accuracy: 0.9930
- F1: 0.9930
- Precision: 0.9923
- Recall: 0.9921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.062 | 1.0 | 5488 | 0.0409 | 0.9914 | 0.9914 | 0.9924 | 0.9885 |
| 0.0279 | 2.0 | 10976 | 0.0414 | 0.9925 | 0.9925 | 0.9923 | 0.9909 |
| 0.008 | 3.0 | 16464 | 0.0451 | 0.9930 | 0.9930 | 0.9923 | 0.9921 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,665 | [
[
-0.036376953125,
-0.037445068359375,
0.01140594482421875,
0.0093841552734375,
-0.0201416015625,
-0.024200439453125,
-0.01540374755859375,
-0.024871826171875,
0.01495361328125,
0.0218048095703125,
-0.053070068359375,
-0.04541015625,
-0.04241943359375,
-0.0130... |
abulatk1n/distilbert-base-uncased-finetuned-emotion | 2023-04-11T18:26:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | abulatk1n | null | null | abulatk1n/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-11T18:07:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9233783185589441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.9235
- F1: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
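A minimal inference sketch, assuming the standard `transformers` pipeline API in a recent version (not part of the original card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abulatk1n/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion class
)
print(classifier("I can't wait to see you again!"))
```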
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8018 | 1.0 | 250 | 0.3189 | 0.9025 | 0.8981 |
| 0.2488 | 2.0 | 500 | 0.2220 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.03814697265625,
-0.041351318359375,
0.0144805908203125,
0.022003173828125,
-0.02557373046875,
-0.01898193359375,
-0.01306915283203125,
-0.00859832763671875,
0.0105133056640625,
0.00833892822265625,
-0.056640625,
-0.05194091796875,
-0.060302734375,
-0.0086... |
amalik27/bert_combo | 2023-04-11T22:49:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | amalik27 | null | null | amalik27/bert_combo | 0 | 2 | transformers | 2023-04-11T21:35:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert_combo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_combo
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0881
- Accuracy: 0.9862
- F1: 0.9862
- Precision: 0.9788
- Recall: 0.9940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0848 | 1.0 | 6059 | 0.0705 | 0.9834 | 0.9834 | 0.9903 | 0.9766 |
| 0.0363 | 2.0 | 12118 | 0.0925 | 0.9821 | 0.9821 | 0.9701 | 0.9950 |
| 0.0118 | 3.0 | 18177 | 0.0881 | 0.9862 | 0.9862 | 0.9788 | 0.9940 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,665 | [
[
-0.038360595703125,
-0.032806396484375,
0.0114593505859375,
0.00930023193359375,
-0.027923583984375,
-0.0213165283203125,
-0.0106201171875,
-0.018768310546875,
0.02252197265625,
0.0252685546875,
-0.051483154296875,
-0.03875732421875,
-0.047271728515625,
-0.0... |
ValenHumano/roberta-base-bne-detector-de-stress-detector-de-stress | 2023-04-11T21:48:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ValenHumano | null | null | ValenHumano/roberta-base-bne-detector-de-stress-detector-de-stress | 0 | 2 | transformers | 2023-04-11T21:36:49 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-detector-de-stress-detector-de-stress
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-detector-de-stress-detector-de-stress
This model is a fine-tuned version of [ValenHumano/roberta-base-bne-detector-de-stress](https://huggingface.co/ValenHumano/roberta-base-bne-detector-de-stress) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4838
- Accuracy: 0.7571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4735 | 1.0 | 169 | 0.3888 | 0.8143 |
| 0.2484 | 2.0 | 338 | 0.4838 | 0.7571 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,522 | [
[
-0.03997802734375,
-0.04522705078125,
0.0185546875,
0.00441741943359375,
-0.031951904296875,
-0.044708251953125,
-0.005031585693359375,
-0.027618408203125,
0.003734588623046875,
0.0198822021484375,
-0.04638671875,
-0.050811767578125,
-0.056915283203125,
-0.0... |
rlucasz93/ppo-Pyramid | 2023-04-11T22:15:13.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | rlucasz93 | null | null | rlucasz93/ppo-Pyramid | 0 | 2 | ml-agents | 2023-04-11T22:04:38 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: rlucasz93/ppo-Pyramid
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 951 | [
[
-0.02685546875,
-0.0195770263671875,
0.00006020069122314453,
0.0263519287109375,
-0.0104217529296875,
0.005649566650390625,
0.027984619140625,
-0.0038394927978515625,
0.0347900390625,
0.03631591796875,
-0.03607177734375,
-0.051971435546875,
-0.03631591796875,
... |
feng5520/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-12T00:58:03.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | feng5520 | null | null | feng5520/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-12T00:57:31 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 15.50 +/- 12.54
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga feng5520 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga feng5520 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga feng5520
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,687 | [
[
-0.04107666015625,
-0.036712646484375,
0.022491455078125,
0.0255126953125,
-0.0098724365234375,
-0.01971435546875,
0.0117340087890625,
-0.0134735107421875,
0.01285552978515625,
0.0250396728515625,
-0.06927490234375,
-0.03631591796875,
-0.0272064208984375,
-0... |
willmendoza/platzi-distilroberta-base-mrpc-glue-will-mendoza | 2023-04-12T01:30:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | willmendoza | null | null | willmendoza/platzi-distilroberta-base-mrpc-glue-will-mendoza | 0 | 2 | transformers | 2023-04-12T01:18:35 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-will-mendoza
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8773234200743494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-will-mendoza
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5374
- Accuracy: 0.8382
- F1: 0.8773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5458 | 1.09 | 500 | 0.5644 | 0.8309 | 0.8832 |
| 0.3627 | 2.18 | 1000 | 0.5374 | 0.8382 | 0.8773 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
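Because MRPC is a sentence-pair (paraphrase) task, inference takes two sentences at once. A minimal sketch (the label order 0 = not equivalent, 1 = equivalent follows the usual MRPC convention and is an assumption, since the card records no id2label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "willmendoza/platzi-distilroberta-base-mrpc-glue-will-mendoza"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence1 = "The company posted record profits this quarter."
sentence2 = "Quarterly earnings for the firm reached an all-time high."

# Encoding the two sentences together produces the pair input MRPC models expect.
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print(f"P(not equivalent) = {probs[0]:.3f}, P(equivalent) = {probs[1]:.3f}")
```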
| 1,878 | [
[
-0.0306243896484375,
-0.037322998046875,
0.01297760009765625,
0.022186279296875,
-0.02838134765625,
-0.0249786376953125,
-0.0085296630859375,
-0.0024967193603515625,
0.00359344482421875,
0.016357421875,
-0.049560546875,
-0.046478271484375,
-0.058837890625,
-... |
Nbardy/holycene-diffusers | 2023-04-12T01:23:01.000Z | [
"diffusers",
"region:us"
] | null | Nbardy | null | null | Nbardy/holycene-diffusers | 0 | 2 | diffusers | 2023-04-12T01:23:06 | https://civitai.com/models/24345
Converted to the diffusers format for compatibility.
All credits go to the original authors | 111 | [
[
-0.01058197021484375,
0.0034351348876953125,
0.057769775390625,
0.06256103515625,
-0.01091766357421875,
-0.025360107421875,
0.0253753662109375,
0.01218414306640625,
0.0131072998046875,
0.0311431884765625,
-0.0173187255859375,
0.00672149658203125,
-0.011199951171... |
davidliu1110/bert-base-chinese-wikiann-zh-ner-2 | 2023-04-12T01:51:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | davidliu1110 | null | null | davidliu1110/bert-base-chinese-wikiann-zh-ner-2 | 0 | 2 | transformers | 2023-04-12T01:32:18 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-wikiann-zh-ner-2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
config: zh
split: validation
args: zh
metrics:
- name: Precision
type: precision
value: 0.7577054794520548
- name: Recall
type: recall
value: 0.7792363723685264
- name: F1
type: f1
value: 0.7683201136498164
- name: Accuracy
type: accuracy
value: 0.9385963268365817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-wikiann-zh-ner-2
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2036
- Precision: 0.7577
- Recall: 0.7792
- F1: 0.7683
- Accuracy: 0.9386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.555 | 0.16 | 400 | 0.3120 | 0.5949 | 0.7117 | 0.6481 | 0.9041 |
| 0.2944 | 0.32 | 800 | 0.2669 | 0.7013 | 0.7052 | 0.7032 | 0.9230 |
| 0.2814 | 0.48 | 1200 | 0.2354 | 0.7078 | 0.7601 | 0.7330 | 0.9317 |
| 0.2351 | 0.64 | 1600 | 0.2271 | 0.7295 | 0.7715 | 0.7499 | 0.9336 |
| 0.2101 | 0.8 | 2000 | 0.2148 | 0.7478 | 0.7764 | 0.7618 | 0.9369 |
| 0.23 | 0.96 | 2400 | 0.2059 | 0.7586 | 0.7752 | 0.7668 | 0.9385 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
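A minimal inference sketch using the token-classification pipeline (the entity labels follow WikiANN's PER/ORG/LOC scheme; the example sentence is invented):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="davidliu1110/bert-base-chinese-wikiann-zh-ner-2",
    aggregation_strategy="simple",  # merge B-/I- pieces into whole entities
)

for entity in ner("张伟在北京为腾讯工作。"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```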
| 2,536 | [
[
-0.033905029296875,
-0.041107177734375,
0.0009784698486328125,
0.00940704345703125,
-0.0213623046875,
-0.0345458984375,
-0.0136566162109375,
-0.019866943359375,
0.0225677490234375,
0.021728515625,
-0.049468994140625,
-0.044586181640625,
-0.044525146484375,
-... |
hlyu/nert_0dense | 2023-04-12T01:42:17.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | hlyu | null | null | hlyu/nert_0dense | 0 | 2 | sentence-transformers | 2023-04-12T01:41:50 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hlyu/nert_0dense
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hlyu/nert_0dense')
embeddings = model.encode(sentences)
print(embeddings)
```
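The returned embeddings can then be compared directly; for example, a short cosine-similarity sketch (a usage illustration, not part of the original card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hlyu/nert_0dense')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Pairwise cosine similarities; the diagonal is each sentence against itself (1.0).
print(util.cos_sim(embeddings, embeddings))
```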
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hlyu/nert_0dense')
model = AutoModel.from_pretrained('hlyu/nert_0dense')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hlyu/nert_0dense)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5055 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,785 | [
[
-0.0176544189453125,
-0.06304931640625,
0.02069091796875,
0.0249481201171875,
-0.015869140625,
-0.032501220703125,
-0.0181732177734375,
0.00017547607421875,
0.01922607421875,
0.0240020751953125,
-0.049530029296875,
-0.04278564453125,
-0.050018310546875,
0.00... |
tjayant/my_awesome_model_b | 2023-04-12T05:08:08.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | tjayant | null | null | tjayant/my_awesome_model_b | 0 | 2 | transformers | 2023-04-12T02:31:11 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tjayant/my_awesome_model_b
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tjayant/my_awesome_model_b
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2676
- Validation Loss: 1.2487
- Train Accuracy: 0.3923
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2280, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3131 | 1.2899 | 0.3832 | 0 |
| 1.2676 | 1.2487 | 0.3923 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
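Since the checkpoint is a TensorFlow (`tf`) model, a minimal inference sketch looks like this (the card records no label mapping, so only the class index is printed):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "tjayant/my_awesome_model_b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to classify.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print("Predicted class index:", int(tf.argmax(probs, axis=-1)[0]))
```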
| 1,761 | [
[
-0.042236328125,
-0.042999267578125,
0.0237884521484375,
0.007518768310546875,
-0.0295562744140625,
-0.0238037109375,
-0.0137481689453125,
-0.0242919921875,
0.01114654541015625,
0.004314422607421875,
-0.04693603515625,
-0.0491943359375,
-0.051544189453125,
-... |
davidliu1110/bert-base-chinese-wikiann-zh-ner | 2023-04-12T03:05:55.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | davidliu1110 | null | null | davidliu1110/bert-base-chinese-wikiann-zh-ner | 0 | 2 | transformers | 2023-04-12T02:31:56 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-wikiann-zh-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
config: zh
split: validation
args: zh
metrics:
- name: Precision
type: precision
value: 0.7890612756621219
- name: Recall
type: recall
value: 0.8060513887777155
- name: F1
type: f1
value: 0.797465848346862
- name: Accuracy
type: accuracy
value: 0.9432393178410795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-wikiann-zh-ner
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Precision: 0.7891
- Recall: 0.8061
- F1: 0.7975
- Accuracy: 0.9432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.842 | 0.16 | 400 | 0.3530 | 0.5535 | 0.6872 | 0.6131 | 0.8927 |
| 0.32 | 0.32 | 800 | 0.2800 | 0.6929 | 0.6749 | 0.6838 | 0.9190 |
| 0.2928 | 0.48 | 1200 | 0.2438 | 0.7031 | 0.7661 | 0.7333 | 0.9301 |
| 0.245 | 0.64 | 1600 | 0.2525 | 0.6959 | 0.7919 | 0.7408 | 0.9280 |
| 0.2236 | 0.8 | 2000 | 0.2315 | 0.7441 | 0.7503 | 0.7472 | 0.9342 |
| 0.2444 | 0.96 | 2400 | 0.2119 | 0.7719 | 0.7675 | 0.7697 | 0.9379 |
| 0.1899 | 1.12 | 2800 | 0.2267 | 0.7531 | 0.8062 | 0.7788 | 0.9387 |
| 0.1649 | 1.28 | 3200 | 0.2249 | 0.7519 | 0.8202 | 0.7846 | 0.9395 |
| 0.1521 | 1.44 | 3600 | 0.2220 | 0.7778 | 0.8032 | 0.7903 | 0.9413 |
| 0.1787 | 1.6 | 4000 | 0.2185 | 0.7879 | 0.7860 | 0.7869 | 0.9417 |
| 0.146 | 1.76 | 4400 | 0.2134 | 0.7721 | 0.8128 | 0.7919 | 0.9416 |
| 0.1557 | 1.92 | 4800 | 0.2111 | 0.7857 | 0.8101 | 0.7977 | 0.9429 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 3,083 | [
[
-0.0394287109375,
-0.042938232421875,
0.0033721923828125,
0.00855255126953125,
-0.0161590576171875,
-0.025726318359375,
-0.01068115234375,
-0.0155181884765625,
0.0307769775390625,
0.0230865478515625,
-0.0479736328125,
-0.04931640625,
-0.045989990234375,
-0.0... |
MohammedEltoum/dqn-SpaceInvadersNoFrameskip-v4 | 2023-04-12T03:25:35.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | MohammedEltoum | null | null | MohammedEltoum/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-12T03:24:49 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 584.00 +/- 104.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MohammedEltoum -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MohammedEltoum -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MohammedEltoum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
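To reproduce a mean-reward estimate like the one reported in the metadata, here is a minimal sketch using SB3's `evaluate_policy` (the zip filename follows the RL Zoo's default naming and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

checkpoint = load_from_hub(
    repo_id="MohammedEltoum/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo default name
)
model = DQN.load(checkpoint, buffer_size=1)  # skip allocating the replay buffer

# Same preprocessing as training: AtariWrapper plus 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```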
| 2,709 | [
[
-0.042083740234375,
-0.036346435546875,
0.0210113525390625,
0.02447509765625,
-0.01100921630859375,
-0.0159912109375,
0.0131988525390625,
-0.013397216796875,
0.0124053955078125,
0.0249481201171875,
-0.07000732421875,
-0.036529541015625,
-0.028656005859375,
-... |
erickdp/fine-tuning-albert-tiny-041123 | 2023-04-12T05:54:04.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | erickdp | null | null | erickdp/fine-tuning-albert-tiny-041123 | 0 | 2 | transformers | 2023-04-12T03:31:04 | ---
tags:
- generated_from_trainer
metrics:
- precision
- f1
- recall
- accuracy
model-index:
- name: fine-tuning-albert-tiny-041123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuning-albert-tiny-041123
This model is a fine-tuned version of [dccuchile/albert-tiny-spanish](https://huggingface.co/dccuchile/albert-tiny-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2027
- Precision: 0.1111
- F1: 0.1667
- Recall: 0.3333
- Accuracy: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | F1 | Recall | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.034 | 1.0 | 1304 | 1.2027 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0266 | 2.0 | 2608 | 1.1847 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0248 | 3.0 | 3912 | 1.1969 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0317 | 4.0 | 5216 | 1.2050 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0285 | 5.0 | 6520 | 1.1994 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0281 | 6.0 | 7824 | 1.1928 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0216 | 7.0 | 9128 | 1.2110 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
| 1.0268 | 8.0 | 10432 | 1.2035 | 0.1111 | 0.1667 | 0.3333 | 0.3333 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,177 | [
[
-0.0469970703125,
-0.037384033203125,
0.01526641845703125,
0.00885772705078125,
-0.01343536376953125,
-0.024566650390625,
-0.01345062255859375,
-0.01580810546875,
0.01557159423828125,
0.0166473388671875,
-0.05206298828125,
-0.04693603515625,
-0.04345703125,
... |
marice/distilbert-base-uncased-finetuned-clinc | 2023-04-12T03:43:09.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | marice | null | null | marice/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-04-12T03:35:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6282 | 2.0 | 636 | 1.8753 | 0.8371 |
| 1.548 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,889 | [
[
-0.034820556640625,
-0.041046142578125,
0.012603759765625,
0.0071868896484375,
-0.0266265869140625,
-0.024688720703125,
-0.01287841796875,
-0.0084381103515625,
0.003124237060546875,
0.0218963623046875,
-0.046478271484375,
-0.048553466796875,
-0.05841064453125,
... |
rwang5688/distilgpt2-finetuned-wikitext2-pt | 2023-10-13T21:37:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | rwang5688 | null | null | rwang5688/distilgpt2-finetuned-wikitext2-pt | 1 | 2 | transformers | 2023-04-12T04:23:56 | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2-pt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7569 | 1.0 | 2334 | 3.6671 |
| 3.6413 | 2.0 | 4668 | 3.6477 |
| 3.596 | 3.0 | 7002 | 3.6429 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
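A minimal text-generation sketch with this checkpoint (the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="rwang5688/distilgpt2-finetuned-wikitext2-pt")
output = generator(
    "The history of natural language processing",
    max_new_tokens=40,  # length of the continuation
    do_sample=True,
    top_p=0.95,
)
print(output[0]["generated_text"])
```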
| 1,397 | [
[
-0.0345458984375,
-0.04241943359375,
0.013427734375,
0.0149993896484375,
-0.0296478271484375,
-0.031982421875,
-0.00453948974609375,
-0.0099029541015625,
-0.00899505615234375,
0.01401519775390625,
-0.0579833984375,
-0.026458740234375,
-0.05963134765625,
-0.0... |
Sigwang/distilbert-base-uncased-finetuned-emotion | 2023-04-14T01:51:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Sigwang | null | null | Sigwang/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-12T04:33:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261570669458271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2262
- Accuracy: 0.926
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.837 | 1.0 | 250 | 0.3302 | 0.9015 | 0.8980 |
| 0.2559 | 2.0 | 500 | 0.2262 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037384033203125,
-0.041473388671875,
0.0146331787109375,
0.0218353271484375,
-0.026458740234375,
-0.0192413330078125,
-0.0135040283203125,
-0.0085601806640625,
0.01023101806640625,
0.00748443603515625,
-0.056365966796875,
-0.052093505859375,
-0.06015014648437... |
nes74/distilbert-base-uncased-finetuned-emotion | 2023-06-03T02:56:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | nes74 | null | null | nes74/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-04-12T05:43:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9260997886540973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8246 | 1.0 | 250 | 0.3126 | 0.909 | 0.9075 |
| 0.2525 | 2.0 | 500 | 0.2210 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
| 1,840 | [
[
-0.03765869140625,
-0.040313720703125,
0.0145721435546875,
0.02215576171875,
-0.0264129638671875,
-0.01995849609375,
-0.012786865234375,
-0.008636474609375,
0.0099334716796875,
0.00823974609375,
-0.055938720703125,
-0.05206298828125,
-0.059600830078125,
-0.0... |
marice/distilbert-base-uncased-distilled-clinc | 2023-04-12T06:09:45.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | marice | null | null | marice/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-04-12T05:57:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9387096774193548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0878
- Accuracy: 0.9387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0369 | 1.0 | 318 | 0.5902 | 0.6987 |
| 0.4468 | 2.0 | 636 | 0.2434 | 0.8606 |
| 0.2204 | 3.0 | 954 | 0.1412 | 0.9113 |
| 0.1478 | 4.0 | 1272 | 0.1121 | 0.9252 |
| 0.1206 | 5.0 | 1590 | 0.1010 | 0.93 |
| 0.1086 | 6.0 | 1908 | 0.0947 | 0.9345 |
| 0.1009 | 7.0 | 2226 | 0.0916 | 0.9368 |
| 0.0966 | 8.0 | 2544 | 0.0896 | 0.9381 |
| 0.0939 | 9.0 | 2862 | 0.0881 | 0.9390 |
| 0.0928 | 10.0 | 3180 | 0.0878 | 0.9387 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.10.3
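The "distilled" in the name presumably refers to knowledge distillation: a student is trained against a teacher's softened output distribution in addition to the hard intent labels. A sketch of the standard loss (the temperature `T`, the mixing weight `alpha`, and the teacher itself are assumptions — the card records none of the distillation settings):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # scaled by T**2 so gradient magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    # Hard targets: ordinary cross-entropy against the ground-truth intents.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```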
| 2,200 | [
[
-0.034637451171875,
-0.038238525390625,
0.0160980224609375,
0.00658416748046875,
-0.0223236083984375,
-0.01678466796875,
-0.0078125,
-0.0041656494140625,
0.01129150390625,
0.022064208984375,
-0.04412841796875,
-0.049652099609375,
-0.0601806640625,
-0.0095748... |
Jcfranco/distilbert-base-uncased-finetuned-sst2 | 2023-04-12T11:08:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Jcfranco | null | null | Jcfranco/distilbert-base-uncased-finetuned-sst2 | 0 | 2 | transformers | 2023-04-12T06:25:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.908256880733945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3078
- Accuracy: 0.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 211 | 0.3078 | 0.9083 |
| No log | 2.0 | 422 | 0.4370 | 0.8968 |
| 0.0968 | 3.0 | 633 | 0.4457 | 0.9002 |
| 0.0968 | 4.0 | 844 | 0.4723 | 0.9048 |
| 0.0259 | 5.0 | 1055 | 0.4991 | 0.9014 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,909 | [
[
-0.0188751220703125,
-0.04852294921875,
0.01419830322265625,
0.01242828369140625,
-0.0307769775390625,
-0.0157623291015625,
-0.0093536376953125,
-0.0040283203125,
0.005950927734375,
0.01432037353515625,
-0.047210693359375,
-0.038055419921875,
-0.06085205078125,
... |
fathyshalab/massive-ar-SA | 2023-04-12T11:08:05.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | fathyshalab | null | null | fathyshalab/massive-ar-SA | 0 | 2 | sentence-transformers | 2023-04-12T11:07:38 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/massive-ar-SA
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive-ar-SA")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,539 | [
[
-0.0061187744140625,
-0.056060791015625,
0.0242919921875,
-0.016693115234375,
-0.01235198974609375,
-0.01354217529296875,
-0.0154266357421875,
-0.01202392578125,
0.005817413330078125,
0.03143310546875,
-0.0302734375,
-0.0213775634765625,
-0.046295166015625,
... |
SakuraKnight/Yelp-Rating-Prediction | 2023-04-12T18:23:48.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | SakuraKnight | null | null | SakuraKnight/Yelp-Rating-Prediction | 0 | 2 | transformers | 2023-04-12T12:36:28 | ---
license: mit
---
A demo BERT classification model trained on (part of) the Yelp dataset.
Photo2Text model: ydshieh/vit-gpt2-coco-en
Expected / Standard Input:
```
[CLS] Business Name [SEP] Address [SEP] City [SEP] Photo2Text Outputs ...
```
Example:
```
[CLS] Paws The Cat Cafe [SEP] 10588 109 Street [SEP] Edmonton [SEP] A cup of coffee
```
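A minimal sketch of assembling this input and predicting a rating (assumptions: the checkpoint loads as a standard `BertForSequenceClassification` and the top class index corresponds to the star rating — neither is recorded on this card; the caption would normally come from the Photo2Text model above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SakuraKnight/Yelp-Rating-Prediction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Join the fields with [SEP]; the tokenizer adds the leading [CLS] and trailing [SEP].
fields = ["Paws The Cat Cafe", "10588 109 Street", "Edmonton", "A cup of coffee"]
text = f" {tokenizer.sep_token} ".join(fields)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    rating = model(**inputs).logits.argmax(dim=-1).item()
print("Predicted rating:", rating)
```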
Expected Output: 5 | 366 | [
[
-0.0004227161407470703,
-0.04486083984375,
0.043243408203125,
0.0093536376953125,
-0.0260162353515625,
-0.01482391357421875,
0.00907135009765625,
-0.04180908203125,
0.024658203125,
0.04193115234375,
-0.052490234375,
-0.04632568359375,
-0.033966064453125,
0.0... |
ArunaSaraswathy/pii_new_model | 2023-04-12T14:18:32.000Z | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | ArunaSaraswathy | null | null | ArunaSaraswathy/pii_new_model | 0 | 2 | transformers | 2023-04-12T14:08:40 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.13.2
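The card does not document the label set, so as a hedged illustration, here is a sketch that redacts whatever spans the model tags as PII (the input text is invented):
```python
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="ArunaSaraswathy/pii_new_model",
    aggregation_strategy="simple",
)

text = "Contact Jane Doe at jane.doe@example.com or 555-0100."
redacted = text
# Replace detected spans right-to-left so earlier character offsets stay valid.
for entity in sorted(detector(text), key=lambda e: e["start"], reverse=True):
    redacted = redacted[: entity["start"]] + "[REDACTED]" + redacted[entity["end"] :]
print(redacted)
```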
| 1,045 | [
[
-0.033050537109375,
-0.056610107421875,
0.012420654296875,
0.0171051025390625,
-0.032684326171875,
-0.01947021484375,
-0.007049560546875,
-0.00699615478515625,
0.0074310302734375,
0.021728515625,
-0.0545654296875,
-0.045196533203125,
-0.05633544921875,
-0.00... |
bblackwell/distilbert-base-uncased-finetuned-cola | 2023-04-25T14:26:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | bblackwell | null | null | bblackwell/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-12T14:33:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1581
- Matthews Correlation: 0.8855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.908469178483356e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 31
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 146 | 0.1384 | 0.8905 |
| No log | 2.0 | 292 | 0.1361 | 0.8738 |
| No log | 3.0 | 438 | 0.1581 | 0.8855 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,606 | [
[
-0.023101806640625,
-0.051788330078125,
0.0158233642578125,
0.021087646484375,
-0.0259246826171875,
-0.0079803466796875,
-0.007232666015625,
-0.0084228515625,
0.0189208984375,
0.0166015625,
-0.045440673828125,
-0.0352783203125,
-0.062164306640625,
-0.0046882... |
jorgeortizfuentes/spanish-spellchecker-flan-t5-base-wiki200000 | 2023-04-13T07:55:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | jorgeortizfuentes | null | null | jorgeortizfuentes/spanish-spellchecker-flan-t5-base-wiki200000 | 0 | 2 | transformers | 2023-04-12T15:03:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: spanish-spellchecker-flan-t5-base-wiki200000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-spellchecker-flan-t5-base-wiki200000
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1471
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 0.1876 | 1.0 | 9755 | 0.1550 | 0.0 | 19.0 |
| 0.1768 | 2.0 | 19510 | 0.1471 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
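A minimal correction sketch (whether the model expects a task prefix is not recorded on this card, so the bare-input call below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jorgeortizfuentes/spanish-spellchecker-flan-t5-base-wiki200000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A Spanish sentence with spelling mistakes.
inputs = tokenizer("una fraze con herrores de hortografia", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```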
| 1,584 | [
[
-0.02783203125,
-0.036468505859375,
0.00994110107421875,
0.01425933837890625,
-0.01134490966796875,
-0.027679443359375,
-0.0186767578125,
-0.027069091796875,
0.01436614990234375,
0.0201416015625,
-0.049102783203125,
-0.0526123046875,
-0.0546875,
0.0068969726... |
GhifSmile/distilbert-base-uncased-PINA-dfnew | 2023-04-12T18:42:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | GhifSmile | null | null | GhifSmile/distilbert-base-uncased-PINA-dfnew | 0 | 2 | transformers | 2023-04-12T15:33:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-PINA-dfnew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-PINA-dfnew
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2599
- Accuracy: 0.9510
- Precision: 0.8737
- Recall: 0.8532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
| 1.1798 | 1.0 | 1438 | 0.4320 | 0.9016 | 0.7777 | 0.7182 |
| 0.2987 | 2.0 | 2876 | 0.2779 | 0.9369 | 0.8340 | 0.8270 |
| 0.1579 | 3.0 | 4314 | 0.2608 | 0.9445 | 0.8374 | 0.8378 |
| 0.0913 | 4.0 | 5752 | 0.2599 | 0.9510 | 0.8737 | 0.8532 |
| 0.0547 | 5.0 | 7190 | 0.2716 | 0.9531 | 0.8893 | 0.8682 |
| 0.0309 | 6.0 | 8628 | 0.2748 | 0.9531 | 0.8921 | 0.8750 |
| 0.0174 | 7.0 | 10066 | 0.2860 | 0.9545 | 0.8966 | 0.8710 |
| 0.01 | 8.0 | 11504 | 0.2972 | 0.9543 | 0.9087 | 0.8989 |
| 0.0063 | 9.0 | 12942 | 0.3012 | 0.9536 | 0.9066 | 0.8967 |
| 0.0044 | 10.0 | 14380 | 0.2978 | 0.9551 | 0.9108 | 0.8997 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,253 | [
[
-0.036468505859375,
-0.041046142578125,
0.018096923828125,
0.011322021484375,
-0.017669677734375,
-0.01181793212890625,
0.00007522106170654297,
-0.001987457275390625,
0.0262908935546875,
0.01904296875,
-0.046600341796875,
-0.050933837890625,
-0.0550537109375,
... |
AyoubChLin/XLMRoberta-large-bbc_news | 2023-04-12T18:13:52.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"en",
"dataset:AyoubChLin/autotrain-data-anymodel_bbc",
"dataset:SetFit/bbc-news",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | AyoubChLin | null | null | AyoubChLin/XLMRoberta-large-bbc_news | 0 | 2 | transformers | 2023-04-12T16:32:48 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: A new model offers an explanation for how the Galilean satellites formed around the solar system’s largest world. Konstantin Batygin did not set out to solve one of the solar system’s most puzzling mysteries when he went for a run up a hill in Nice, France. Dr. Batygin, a Caltech researcher
datasets:
- AyoubChLin/autotrain-data-anymodel_bbc
- SetFit/bbc-news
co2_eq_emissions:
emissions: 2.359134715120443
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48900118383
- CO2 Emissions (in grams): 2.3591
## Validation Metrics
- Loss: 0.116
- Accuracy: 0.978
- Macro F1: 0.978
- Micro F1: 0.978
- Weighted F1: 0.978
- Macro Precision: 0.978
- Micro Precision: 0.978
- Weighted Precision: 0.978
- Macro Recall: 0.978
- Micro Recall: 0.978
- Weighted Recall: 0.978
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-anymodel_bbc-48900118383
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-anymodel_bbc-48900118383", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-anymodel_bbc-48900118383", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,666 | [
[
-0.033538818359375,
-0.0256195068359375,
0.00873565673828125,
0.01190948486328125,
-0.0028438568115234375,
0.0009083747863769531,
0.0021877288818359375,
-0.013275146484375,
-0.00107574462890625,
0.01071929931640625,
-0.05224609375,
-0.031097412109375,
-0.0574645... |
Telstema/distilbert-base-uncased-finetuned-cola | 2023-04-13T01:16:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Telstema | null | null | Telstema/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-04-12T16:36:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.56217893832047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7649
- Matthews Correlation: 0.5622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5218 | 1.0 | 535 | 0.5275 | 0.4033 |
| 0.3492 | 2.0 | 1070 | 0.5052 | 0.4987 |
| 0.2362 | 3.0 | 1605 | 0.5527 | 0.5382 |
| 0.1763 | 4.0 | 2140 | 0.7443 | 0.5378 |
| 0.1212 | 5.0 | 2675 | 0.7649 | 0.5622 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,040 | [
[
-0.02227783203125,
-0.04998779296875,
0.01131439208984375,
0.0186767578125,
-0.0230255126953125,
-0.007778167724609375,
-0.00502777099609375,
-0.0036029815673828125,
0.0235137939453125,
0.0097808837890625,
-0.0447998046875,
-0.03509521484375,
-0.06268310546875,
... |
OMARS200/SpaceInvadersNoFrameskip-v4 | 2023-04-12T16:53:03.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | OMARS200 | null | null | OMARS200/SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-04-12T16:52:27 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 587.50 +/- 217.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga OMARS200 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga OMARS200 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga OMARS200
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,691 | [
[
-0.041412353515625,
-0.0361328125,
0.0218963623046875,
0.0251617431640625,
-0.009979248046875,
-0.0164794921875,
0.01264190673828125,
-0.01331329345703125,
0.0131988525390625,
0.02447509765625,
-0.07049560546875,
-0.035888671875,
-0.027557373046875,
-0.00504... |
asubiabre/ppo-PyramidsTraining | 2023-04-12T17:32:41.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | asubiabre | null | null | asubiabre/ppo-PyramidsTraining | 0 | 2 | ml-agents | 2023-04-12T17:32:35 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: asubiabre/ppo-PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 960 | [
[
-0.026397705078125,
-0.018798828125,
-0.0010633468627929688,
0.0274200439453125,
-0.00982666015625,
0.0062103271484375,
0.0273895263671875,
-0.002490997314453125,
0.034820556640625,
0.0357666015625,
-0.036285400390625,
-0.052215576171875,
-0.035400390625,
-0... |
AyoubChLin/DistilRoberta-bbc_news | 2023-04-12T21:49:32.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:AyoubChLin/autotrain-data-distilroberta-bbc_news",
"dataset:SetFit/bbc-news",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AyoubChLin | null | null | AyoubChLin/DistilRoberta-bbc_news | 0 | 2 | transformers | 2023-04-12T18:54:42 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: I love AutoTrain 🤗
datasets:
- AyoubChLin/autotrain-data-distilroberta-bbc_news
- SetFit/bbc-news
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48937118428
- CO2 Emissions (in grams): 0.6873
## Validation Metrics
- Loss: 0.063
- Accuracy: 0.985
- Macro F1: 0.984
- Micro F1: 0.985
- Weighted F1: 0.985
- Macro Precision: 0.984
- Micro Precision: 0.985
- Weighted Precision: 0.985
- Macro Recall: 0.985
- Micro Recall: 0.985
- Weighted Recall: 0.985
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-distilroberta-bbc_news-48937118428
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-distilroberta-bbc_news-48937118428", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-distilroberta-bbc_news-48937118428", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
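# (Assumed addition, not part of the generated card:) map the raw logits to a
# predicted class name via the checkpoint's id2label mapping.
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])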
``` | 1,385 | [
[
-0.0310211181640625,
-0.027984619140625,
0.007587432861328125,
0.01053619384765625,
-0.0030307769775390625,
0.006420135498046875,
-0.0017786026000976562,
-0.0133819580078125,
-0.00475311279296875,
0.004486083984375,
-0.0458984375,
-0.03326416015625,
-0.058197021... |
AyoubChLin/delberta_large_bbc_news | 2023-04-12T19:25:14.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:AyoubChLin/autotrain-data-delberta-large",
"dataset:SetFit/bbc-news",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | AyoubChLin | null | null | AyoubChLin/delberta_large_bbc_news | 0 | 2 | transformers | 2023-04-12T18:59:38 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: I love AutoTrain 🤗
datasets:
- AyoubChLin/autotrain-data-delberta-large
- SetFit/bbc-news
co2_eq_emissions:
emissions: 4.083685268664441
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48938118433
- CO2 Emissions (in grams): 4.0837
## Validation Metrics
- Loss: 0.130
- Accuracy: 0.980
- Macro F1: 0.980
- Micro F1: 0.980
- Weighted F1: 0.980
- Macro Precision: 0.980
- Micro Precision: 0.980
- Weighted Precision: 0.980
- Macro Recall: 0.980
- Micro Recall: 0.980
- Weighted Recall: 0.980
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-delberta-large-48938118433
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-delberta-large-48938118433", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-delberta-large-48938118433", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
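# (Assumed alternative, not from the card:) the high-level pipeline API wraps
# tokenization, the forward pass, and label mapping in a single call.
from transformers import pipeline
classifier = pipeline("text-classification", model="AyoubChLin/autotrain-delberta-large-48938118433", use_auth_token=True)
print(classifier("I love AutoTrain"))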
``` | 1,402 | [
[
-0.034515380859375,
-0.0261993408203125,
0.0123291015625,
0.01351165771484375,
-0.00524139404296875,
0.0036334991455078125,
0.00125885009765625,
-0.0153350830078125,
0.004543304443359375,
0.00603485107421875,
-0.050140380859375,
-0.03363037109375,
-0.05859375,
... |
AyoubChLin/Albert-bbc-news | 2023-04-12T21:39:16.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain",
"en",
"dataset:AyoubChLin/autotrain-data-albert-bbc-news",
"dataset:SetFit/bbc-news",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | AyoubChLin | null | null | AyoubChLin/Albert-bbc-news | 0 | 2 | transformers | 2023-04-12T19:00:44 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: I love AutoTrain 🤗
datasets:
- AyoubChLin/autotrain-data-albert-bbc-news
- SetFit/bbc-news
co2_eq_emissions:
emissions: 13.344689233410659
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48939118438
- CO2 Emissions (in grams): 13.3447
## Validation Metrics
- Loss: 0.103
- Accuracy: 0.978
- Macro F1: 0.978
- Micro F1: 0.978
- Weighted F1: 0.978
- Macro Precision: 0.977
- Micro Precision: 0.978
- Weighted Precision: 0.978
- Macro Recall: 0.978
- Micro Recall: 0.978
- Weighted Recall: 0.978
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-albert-bbc-news-48939118438
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-albert-bbc-news-48939118438", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-albert-bbc-news-48939118438", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
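# (Assumed addition, not part of the generated card:) convert the raw logits
# into per-class probabilities with a softmax.
import torch
probabilities = torch.softmax(outputs.logits, dim=-1)
print(probabilities)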
``` | 1,407 | [
[
-0.0343017578125,
-0.0265045166015625,
0.0084228515625,
0.01398468017578125,
-0.0028095245361328125,
0.004566192626953125,
0.000033855438232421875,
-0.01181793212890625,
-0.0031337738037109375,
0.00844573974609375,
-0.049407958984375,
-0.0313720703125,
-0.057678... |
AyoubChLin/roberta-large-bbc_news | 2023-08-08T20:29:24.000Z | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:AyoubChLin/autotrain-data-roberta-large-bbc_news",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | AyoubChLin | null | null | AyoubChLin/roberta-large-bbc_news | 0 | 2 | transformers | 2023-04-12T19:09:36 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- AyoubChLin/autotrain-data-roberta-large-bbc_news
co2_eq_emissions:
emissions: 1.9843929651071104
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 48943118458
- CO2 Emissions (in grams): 1.9844
## Validation Metrics
- Loss: 0.062
- Accuracy: 0.991
- Macro F1: 0.991
- Micro F1: 0.991
- Weighted F1: 0.991
- Macro Precision: 0.991
- Micro Precision: 0.991
- Weighted Precision: 0.991
- Macro Recall: 0.992
- Micro Recall: 0.991
- Weighted Recall: 0.991
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AyoubChLin/autotrain-roberta-large-bbc_news-48943118458
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AyoubChLin/autotrain-roberta-large-bbc_news-48943118458", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AyoubChLin/autotrain-roberta-large-bbc_news-48943118458", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,345 | [
[
-0.031005859375,
-0.0267181396484375,
0.00868988037109375,
0.0110931396484375,
-0.0017299652099609375,
0.004180908203125,
-0.0026988983154296875,
-0.0135498046875,
-0.002231597900390625,
0.00589752197265625,
-0.04718017578125,
-0.033935546875,
-0.057281494140625... |
bblackwell/distilbert-base-uncased-finetuned-cola-Christianity | 2023-04-25T19:04:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | bblackwell | null | null | bblackwell/distilbert-base-uncased-finetuned-cola-Christianity | 0 | 2 | transformers | 2023-04-12T20:00:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-Christianity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-Christianity
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2016
- Matthews Correlation: 0.8950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
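A sketch of how these settings map onto `transformers` `TrainingArguments` (an assumed reconstruction, not the author's script; `output_dir` is a placeholder and the dataset/`Trainer` wiring is omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so they need no override.
)
```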
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2197 | 1.0 | 615 | 0.1455 | 0.8815 |
| 0.13 | 2.0 | 1230 | 0.1453 | 0.8853 |
| 0.0859 | 3.0 | 1845 | 0.1854 | 0.8879 |
| 0.0511 | 4.0 | 2460 | 0.2016 | 0.8950 |
| 0.0183 | 5.0 | 3075 | 0.2132 | 0.8940 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,764 | [
[
-0.0287017822265625,
-0.046417236328125,
0.01267242431640625,
0.0171051025390625,
-0.0265350341796875,
-0.00826263427734375,
-0.00733184814453125,
-0.0083465576171875,
0.013763427734375,
0.0149688720703125,
-0.04791259765625,
-0.04132080078125,
-0.06130981445312... |
lugrenl/bert-base-banking77-pt2 | 2023-04-12T21:44:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | lugrenl | null | null | lugrenl/bert-base-banking77-pt2 | 0 | 2 | transformers | 2023-04-12T20:47:18 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9360461829994651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3261
- F1: 0.9360
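A minimal inference sketch (assumed usage, not from the card; banking77 is an intent-classification benchmark, so the checkpoint's label mapping carries the intent names):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lugrenl/bert-base-banking77-pt2")
# Hypothetical customer query; the model predicts one of the 77 banking intents.
print(classifier("I lost my card, how do I freeze it?"))
```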
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5369 | 1.0 | 2501 | 0.4475 | 0.8808 |
| 0.2189 | 2.0 | 5002 | 0.3341 | 0.9290 |
| 0.1552 | 3.0 | 7503 | 0.3261 | 0.9360 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.11.0
| 1,728 | [
[
-0.0299530029296875,
-0.0396728515625,
0.0121002197265625,
0.0131072998046875,
-0.042266845703125,
-0.0274658203125,
-0.00896453857421875,
-0.018157958984375,
-0.00417327880859375,
0.041259765625,
-0.042999267578125,
-0.04339599609375,
-0.052093505859375,
-0... |
DrewG/Tale_2_Cities | 2023-04-12T21:26:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | DrewG | null | null | DrewG/Tale_2_Cities | 0 | 2 | transformers | 2023-04-12T20:55:39 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Tale_2_Cities
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tale_2_Cities
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
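A minimal generation sketch (assumed usage, not documented in the card; the prompt is a hypothetical example):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="DrewG/Tale_2_Cities")
print(generator("It was the best of times,", max_new_tokens=40)[0]["generated_text"])
```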
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 989 | [
[
-0.027740478515625,
-0.050323486328125,
0.037078857421875,
0.00719451904296875,
-0.024627685546875,
-0.0212249755859375,
0.004329681396484375,
-0.031951904296875,
-0.0005044937133789062,
0.0238037109375,
-0.04718017578125,
-0.035736083984375,
-0.047332763671875,... |
platzi/platzi-distilroberta-base-mrpc-glue-oscar-moreno | 2023-04-12T22:13:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-oscar-moreno | 0 | 2 | transformers | 2023-04-12T21:47:10 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-oscar-moreno
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8161764705882353
- name: F1
type: f1
value: 0.8695652173913044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-oscar-moreno
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5426
- Accuracy: 0.8162
- F1: 0.8696
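A minimal inference sketch (assumed usage, not from the card; MRPC is a sentence-pair paraphrase task, so the tokenizer takes two sentences):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "platzi/platzi-distilroberta-base-mrpc-glue-oscar-moreno"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical sentence pair
inputs = tokenizer(
    "The company said profits rose.",
    "Profits increased, the company reported.",
    return_tensors="pt",
)
with torch.no_grad():
    predicted_id = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```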
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5255 | 1.09 | 500 | 0.5426 | 0.8162 | 0.8696 |
| 0.3669 | 2.18 | 1000 | 0.5466 | 0.8480 | 0.8869 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,884 | [
[
-0.0305328369140625,
-0.03570556640625,
0.006641387939453125,
0.0182952880859375,
-0.030426025390625,
-0.0249786376953125,
-0.005229949951171875,
-0.00688934326171875,
0.01181793212890625,
0.007488250732421875,
-0.049652099609375,
-0.038360595703125,
-0.05941772... |
auditi41/wav2vec2-large-xlsr-53-Bangla | 2023-04-13T18:52:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | auditi41 | null | null | auditi41/wav2vec2-large-xlsr-53-Bangla | 1 | 2 | transformers | 2023-04-12T23:42:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-Bangla
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: bn
split: train+validation
args: bn
metrics:
- name: Wer
type: wer
value: 0.5442110214000156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Bangla
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6125
- Wer: 0.5442
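A minimal transcription sketch (assumed usage, not from the card; `sample.wav` is a hypothetical Bangla clip, and `librosa` handles resampling to the 16 kHz rate XLSR expects):

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "auditi41/wav2vec2-large-xlsr-53-Bangla"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # hypothetical input file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Greedy CTC decoding of the most likely token at each frame
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```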
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6881 | 2.28 | 600 | 1.0325 | 0.9634 |
| 0.8087 | 4.56 | 1200 | 0.6090 | 0.7430 |
| 0.5089 | 6.84 | 1800 | 0.5156 | 0.6615 |
| 0.3864 | 9.13 | 2400 | 0.5287 | 0.6676 |
| 0.3064 | 11.41 | 3000 | 0.5411 | 0.6278 |
| 0.2535 | 13.69 | 3600 | 0.5206 | 0.6149 |
| 0.216 | 15.97 | 4200 | 0.5596 | 0.6120 |
| 0.1852 | 18.25 | 4800 | 0.5658 | 0.5821 |
| 0.1653 | 20.53 | 5400 | 0.5938 | 0.5521 |
| 0.1499 | 22.81 | 6000 | 0.5825 | 0.5645 |
| 0.1323 | 25.09 | 6600 | 0.6151 | 0.5593 |
| 0.122 | 27.38 | 7200 | 0.6046 | 0.5556 |
| 0.1118 | 29.66 | 7800 | 0.6125 | 0.5442 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| 2,522 | [
[
-0.033905029296875,
-0.0330810546875,
-0.002201080322265625,
0.01416015625,
-0.017242431640625,
-0.0129547119140625,
-0.01476287841796875,
-0.0205078125,
0.0129547119140625,
0.02532958984375,
-0.059844970703125,
-0.04022216796875,
-0.0487060546875,
-0.019393... |
Muhsabrys/autotrain-iuexistmulti-49035118635 | 2023-04-13T02:15:21.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Muhsabrys/autotrain-data-iuexistmulti",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-iuexistmulti-49035118635 | 0 | 2 | transformers | 2023-04-13T02:13:14 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-iuexistmulti
co2_eq_emissions:
emissions: 0.8019084818135189
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49035118635
- CO2 Emissions (in grams): 0.8019
## Validation Metrics
- Loss: 0.691
- Accuracy: 0.743
- Macro F1: 0.521
- Micro F1: 0.743
- Weighted F1: 0.704
- Macro Precision: 0.495
- Micro Precision: 0.743
- Weighted Precision: 0.668
- Macro Recall: 0.550
- Micro Recall: 0.743
- Weighted Recall: 0.743
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-iuexistmulti-49035118635
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-iuexistmulti-49035118635", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-iuexistmulti-49035118635", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
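# (Assumed addition, not part of the generated card:) score several texts at
# once by padding them into a single batch.
batch = tokenizer(["I love AutoTrain", "A second example input"], padding=True, return_tensors="pt")
print(model(**batch).logits.argmax(dim=-1))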
``` | 1,301 | [
[
-0.034423828125,
-0.021148681640625,
0.0106048583984375,
0.0106201171875,
-0.00018155574798583984,
0.005218505859375,
0.0005030632019042969,
-0.01424407958984375,
-0.00426483154296875,
0.0034732818603515625,
-0.048736572265625,
-0.0305938720703125,
-0.0572814941... |
vishwapatel123/Toxic-comment | 2023-04-13T02:28:57.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | vishwapatel123 | null | null | vishwapatel123/Toxic-comment | 0 | 2 | transformers | 2023-04-13T02:25:01 | ---
license: afl-3.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
## Name
Vishwa Patel
## Project
Toxic Comment Classification
## Model description
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased), trained to classify toxic comments.
## Training data
The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 90% of the `train.csv` data to train the model.
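A minimal inference sketch (assumed usage, not from the card; the example sentence is hypothetical and the label names depend on the uploaded config):

```python
from transformers import pipeline

toxicity = pipeline("text-classification", model="vishwapatel123/Toxic-comment")
print(toxicity("You are a wonderful person."))  # hypothetical, non-toxic example
```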
| 570 | [
[
-0.019775390625,
-0.0303802490234375,
0.0241241455078125,
0.006725311279296875,
-0.020294189453125,
0.01276397705078125,
0.0089874267578125,
-0.0190887451171875,
0.0003657341003417969,
0.043701171875,
-0.044403076171875,
-0.031402587890625,
-0.05303955078125,
... |
Muhsabrys/autotrain-iuexist_twhin-49038118649 | 2023-04-13T02:35:17.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Muhsabrys/autotrain-data-iuexist_twhin",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Muhsabrys | null | null | Muhsabrys/autotrain-iuexist_twhin-49038118649 | 0 | 2 | transformers | 2023-04-13T02:31:37 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Muhsabrys/autotrain-data-iuexist_twhin
co2_eq_emissions:
emissions: 1.410217361850194
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 49038118649
- CO2 Emissions (in grams): 1.4102
## Validation Metrics
- Loss: 0.636
- Accuracy: 0.766
- Macro F1: 0.537
- Micro F1: 0.766
- Weighted F1: 0.725
- Macro Precision: 0.511
- Micro Precision: 0.766
- Weighted Precision: 0.690
- Macro Recall: 0.567
- Micro Recall: 0.766
- Weighted Recall: 0.766
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Muhsabrys/autotrain-iuexist_twhin-49038118649
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Muhsabrys/autotrain-iuexist_twhin-49038118649", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Muhsabrys/autotrain-iuexist_twhin-49038118649", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,304 | [
[
-0.031341552734375,
-0.0219879150390625,
0.0113525390625,
0.009918212890625,
-0.0011653900146484375,
0.00787353515625,
-0.0009832382202148438,
-0.01470947265625,
-0.007083892822265625,
0.0036182403564453125,
-0.049102783203125,
-0.033294677734375,
-0.05691528320... |