modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Melanol/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-25T15:10:56.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Melanol | null | null | Melanol/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-25T15:10:20 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 573.00 +/- 199.21
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Melanol -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Melanol -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
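You can also load the agent directly in Python with SB3 and the `huggingface_sb3` helper — a minimal sketch, assuming the checkpoint file follows the usual RL Zoo naming convention (`dqn-SpaceInvadersNoFrameskip-v4.zip`):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the checkpoint from the Hub (the filename is an assumption based
# on the RL Zoo naming convention).
checkpoint = load_from_hub(
    repo_id="Melanol/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers plus 2-frame
# stacking (frame_stack is 2 in the hyperparameters below).
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=2)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```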
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Melanol
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 2),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,756 | [
[
-0.043182373046875,
-0.039764404296875,
0.0202484130859375,
0.0258941650390625,
-0.01172637939453125,
-0.0182647705078125,
0.009735107421875,
-0.01296234130859375,
0.01345062255859375,
0.0234375,
-0.0718994140625,
-0.036712646484375,
-0.02508544921875,
-0.00... |
jclynn/finetuning-sentiment-es-synthetic-samples | 2023-06-25T17:49:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | jclynn | null | null | jclynn/finetuning-sentiment-es-synthetic-samples | 0 | 2 | transformers | 2023-06-25T16:48:16 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-es-synthetic-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-es-synthetic-samples
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6517
- Accuracy: 0.8889
- F1: 0.9189
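For inference, a minimal sketch with the `transformers` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Spanish sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="jclynn/finetuning-sentiment-es-synthetic-samples",
)
print(classifier("La película fue excelente"))
```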
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,224 | [
[
-0.045257568359375,
-0.056854248046875,
0.013641357421875,
0.0311279296875,
-0.03472900390625,
-0.01303863525390625,
-0.030303955078125,
-0.01453399658203125,
0.02130126953125,
0.0170440673828125,
-0.0673828125,
-0.058685302734375,
-0.04248046875,
-0.0045661... |
Shridipta-06/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-25T18:57:49.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Shridipta-06 | null | null | Shridipta-06/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-25T18:57:16 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 487.00 +/- 104.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Shridipta-06 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Shridipta-06 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Shridipta-06
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,769 | [
[
-0.042694091796875,
-0.03985595703125,
0.0187835693359375,
0.02520751953125,
-0.0115203857421875,
-0.0178070068359375,
0.010009765625,
-0.0131072998046875,
0.01273345947265625,
0.021697998046875,
-0.07171630859375,
-0.03411865234375,
-0.0251007080078125,
-0.... |
DunnBC22/sentence-t5-large-FT-Quora_Sentence_Similarity-400 | 2023-07-20T17:30:23.000Z | [
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | sentence-similarity | DunnBC22 | null | null | DunnBC22/sentence-t5-large-FT-Quora_Sentence_Similarity-400 | 1 | 2 | sentence-transformers | 2023-06-26T05:23:16 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
---
# Quora Sentence Similarity
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Semantic_Similarity/Semantic%20Similarity-large.ipynb
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('DunnBC22/sentence-t5-large-FT-Quora_Sentence_Similarity-400')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
| Metric | Measure | Value | Notes |
| :--------: | :--------: | :--------: | :--------: |
| Accuracy | **Cosine-Similarity** | 88.72 | Threshold: 0.8397 |
| F1 | Cosine-Similarity | 85.22 | Threshold: 0.8223 |
| Precision | Cosine-Similarity | 80.72 | - |
| Recall | Cosine-Similarity | 90.25 | - |
| Average Precision | Cosine-Similarity | 89.75 | - |
| Accuracy | **Manhattan-Distance** | 88.71 | Threshold: 12.4351 |
| F1 | Manhattan-Distance | 85.22 | Threshold: 13.2209 |
| Precision | Manhattan-Distance | 80.58 | - |
| Recall | Manhattan-Distance | 90.42 | - |
| Average Precision | Manhattan-Distance | 89.74 | - |
| Accuracy | **Euclidean-Distance** | 88.72 | Threshold: 0.5662 |
| F1 | Euclidean-Distance | 85.22 | Threshold: 0.5962 |
| Precision | Euclidean-Distance | 80.72 | - |
| Recall | Euclidean-Distance | 90.25 | - |
| Average Precision | Euclidean-Distance | 89.75 | - |
| Accuracy | **Dot-Product** | 88.72 | Threshold: 0.8397 |
| F1 | Dot-Product | 85.22 | Threshold: 0.8223 |
| Precision | Dot-Product | 80.72 | - |
| Recall | Dot-Product | 90.25 | - |
| Average Precision | Dot-Product | 89.75 | - |
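To apply these thresholds for duplicate detection — a minimal sketch using the cosine-similarity accuracy threshold from the table above (the question pair is illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("DunnBC22/sentence-t5-large-FT-Quora_Sentence_Similarity-400")

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
emb1, emb2 = model.encode([q1, q2])

# Pairs scoring at or above the evaluation threshold are predicted duplicates.
score = util.cos_sim(emb1, emb2).item()
print(score, "duplicate" if score >= 0.8397 else "not duplicate")
```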
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=DunnBC22/sentence-t5-large-FT-Quora_Sentence_Similarity-400)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5055 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 20,
"weight_decay": 0.01
}
```
**Potential Improvements**
One way to improve the results of this model is to use a larger T5 checkpoint; this model was trained from the T5-Large checkpoint.
The available checkpoints are:
| Checkpoint | # of Train Params |
| :--------: | :--------: |
| T5-Base | 220 Million |
| T5-Large | 770 Million (used for this model) |
| T5-3B | 3 Billion |
| T5-11B | 11 Billion |
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 34, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 1024, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
Dataset Source: https://www.kaggle.com/datasets/quora/question-pairs-dataset | 4,139 | [
[
-0.021331787109375,
-0.056549072265625,
0.032806396484375,
0.01142120361328125,
-0.01690673828125,
-0.0181427001953125,
-0.01522064208984375,
-0.01126861572265625,
0.0186767578125,
0.0279693603515625,
-0.045074462890625,
-0.0599365234375,
-0.04486083984375,
... |
yhna/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-27T05:29:43.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | yhna | null | null | yhna/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-26T08:52:38 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 820.50 +/- 249.79
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yhna -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yhna -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yhna
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,747 | [
[
-0.043304443359375,
-0.0394287109375,
0.01934814453125,
0.025115966796875,
-0.01107025146484375,
-0.0173187255859375,
0.010711669921875,
-0.0133056640625,
0.01277923583984375,
0.02276611328125,
-0.07244873046875,
-0.034423828125,
-0.0250701904296875,
-0.0042... |
rifkat/uz_kr_lang-detection | 2023-06-26T13:19:19.000Z | [
"transformers",
"pytorch",
"text-classification",
"uz",
"en",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | rifkat | null | null | rifkat/uz_kr_lang-detection | 0 | 2 | transformers | 2023-06-26T11:06:29 | ---
pipeline_tag: text-classification
language:
- uz
- en
- ru
license: apache-2.0
metrics:
- accuracy
---
<p><b> Til identifikatori.</b>
Tabiiy tilni qayta ishlash (NLP) sohasida tilni aniqlash vazifasi ma'lum matn yoki hujjat tilini aniqlashni o'z ichiga oladi,
ammo ko'plab tillarni aniqlash qobiliyati qiyinlashadi. Ushbu model matndan 21 tilni tanib oladi, xususan, oʻzbek tilida
qoʻllaniladigan lotin-kirill yozuviga eʼtibor qaratadi. Bu boradagi tadqiqotlar kamligini hisobga olib, mos transformator
arxitekturasiga asoslangan oʻzbek lotin-kirill yozuvini aniqlik darajasi yuqori boʻlgan tilni aniqlash modelini taqdim etamiz.
Modelimiz biz yaratgan o‘zbek tili korpusidan foydalangan holda baholandi, bu ham kelajakda o‘zbek tilini aniqlash vazifalarini
baholash uchun qimmatli manba bo‘lib xizmat qilishi mumkin. Ushbu model 21 ta tilni, jumladan, ikkita alifboda (lotin va kirill)
ifodalangan o‘zbek tilini qamrab oladi.
<p><b> Language identifier. </b>
The task of language identification in Natural Language Processing (NLP) involves identifying the language of a particular text or document,
but the ability to identify multiple languages can be challenging. This model is capable of recognizing 21 languages from text, specifically
focusing on the Latin-Cyrillic script used in Uzbek. Considering the scarcity of research in this area, we present a language identification
model with a high degree of accuracy for the Uzbek Latin-Cyrillic script, based on the relevant transformer architecture. Our model has been
evaluated using the Uzbek corpus that we created, which can potentially serve as a valuable resource for evaluating language identification
tasks for Uzbek in the future. This model encompasses 21 languages, including Uzbek expressed in two scripts (Latin and Cyrillic).
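As a minimal usage sketch (the example sentence is illustrative; the label names returned depend on the model's configuration):
```python
from transformers import pipeline

# Load the language identifier and classify a short text.
lang_id = pipeline("text-classification", model="rifkat/uz_kr_lang-detection")
print(lang_id("Bu matn o'zbek tilida yozilgan."))
```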
| 1,814 | [
[
-0.00969696044921875,
-0.037445068359375,
0.0162506103515625,
0.0278778076171875,
-0.02777099609375,
0.02105712890625,
-0.0252685546875,
-0.02716064453125,
-0.007366180419921875,
0.042266845703125,
-0.031646728515625,
-0.049591064453125,
-0.030426025390625,
... |
roshan77/ML-Agents-Pyramids_v1 | 2023-06-26T16:24:52.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | roshan77 | null | null | roshan77/ML-Agents-Pyramids_v1 | 0 | 2 | ml-agents | 2023-06-26T16:24:51 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: roshan77/ML-Agents-Pyramids_v1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,342 | [
[
-0.040985107421875,
-0.034912109375,
0.0014581680297851562,
0.01430511474609375,
-0.00981903076171875,
0.013641357421875,
0.016571044921875,
-0.015899658203125,
0.034423828125,
0.0296630859375,
-0.042816162109375,
-0.0499267578125,
-0.02935791015625,
-0.0155... |
Ryukijano/distilhubert-finetuned-gtzan | 2023-07-05T05:12:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"music",
"dataset:marsyas/gtzan",
"license:openrail",
"endpoints_compatible",
"has_space",
"region:us"
] | audio-classification | Ryukijano | null | null | Ryukijano/distilhubert-finetuned-gtzan | 0 | 2 | transformers | 2023-06-26T16:29:47 | ---
license: openrail
datasets:
- marsyas/gtzan
tags:
- music
pipeline_tag: audio-classification
metrics:
- accuracy
library_name: transformers
---
Model fine-tuned as part of the Hugging Face Audio course; it reaches roughly 84% accuracy on the GTZAN evaluation set. | 201 | [
[
-0.039154052734375,
-0.0526123046875,
0.008087158203125,
-0.0005478858947753906,
-0.0027313232421875,
-0.01033782958984375,
0.00008118152618408203,
-0.0201873779296875,
0.0185089111328125,
0.06201171875,
-0.0732421875,
-0.0101776123046875,
-0.0004777908325195312... |
anwarrehman/simply-law-classify-v1.0 | 2023-06-26T16:31:03.000Z | [
"keras",
"region:us"
] | null | anwarrehman | null | null | anwarrehman/simply-law-classify-v1.0 | 0 | 2 | keras | 2023-06-26T16:30:27 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
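A minimal loading sketch, assuming the repository was pushed with the standard Keras Hub mixin so that `from_pretrained_keras` can rebuild it:
```python
from huggingface_hub import from_pretrained_keras

# Download the saved Keras model from the Hub and rebuild it.
model = from_pretrained_keras("anwarrehman/simply-law-classify-v1.0")
model.summary()
```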
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
SAcharya123123123Wafflehouse/simply-law-textgen-v1.0 | 2023-06-26T18:54:40.000Z | [
"keras",
"region:us"
] | null | SAcharya123123123Wafflehouse | null | null | SAcharya123123123Wafflehouse/simply-law-textgen-v1.0 | 0 | 2 | keras | 2023-06-26T18:54:31 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
magnustragardh/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-26T19:28:48.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | magnustragardh | null | null | magnustragardh/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-26T19:28:26 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 861.00 +/- 348.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga magnustragardh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga magnustragardh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga magnustragardh
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,777 | [
[
-0.043182373046875,
-0.039154052734375,
0.0200042724609375,
0.0245208740234375,
-0.011383056640625,
-0.0172882080078125,
0.00959014892578125,
-0.013427734375,
0.0131072998046875,
0.0218963623046875,
-0.072509765625,
-0.03485107421875,
-0.025390625,
-0.002929... |
Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k | 2023-07-04T21:39:29.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Panchovix | null | null | Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k | 1 | 2 | transformers | 2023-06-26T20:36:56 | ---
license: other
---
[WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), without quant. (Full FP16 model) | 293 | [
[
-0.04730224609375,
-0.0474853515625,
-0.00513458251953125,
0.0188751220703125,
-0.032928466796875,
0.0195770263671875,
0.01103973388671875,
-0.05462646484375,
0.045074462890625,
0.070556640625,
-0.0654296875,
-0.023468017578125,
-0.027587890625,
-0.010856628... |
dicta-il/BEREL | 2023-07-03T15:37:10.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | dicta-il | null | null | dicta-il/BEREL | 0 | 2 | transformers | 2023-06-26T22:35:48 | ---
license: apache-2.0
language:
- he
library_name: transformers
tags:
- bert
---
> Update 2023-5-23: This model is `BEREL` version 1.0. We are now happy to provide a much improved `BEREL_2.0`.
# Introducing BEREL: BERT Embeddings for Rabbinic-Encoded Language
When using BEREL, please reference:
Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Cheyn Shmuel Shmidman, Eli Handel, Moshe Koppel, "Introducing BEREL: BERT Embeddings for Rabbinic-Encoded Language", Aug 2022 [arXiv:2208.01875]
1. Usage:
```python
from transformers import AutoTokenizer, BertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('dicta-il/BEREL')
model = BertForMaskedLM.from_pretrained('dicta-il/BEREL')
# for evaluation, disable dropout
model.eval()
```
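Continuing from the snippet above, a minimal masked-prediction sketch (the prompt is an arbitrary illustrative verse fragment):
```python
import torch

# Predict the top completions for a masked token.
text = f"בראשית ברא אלהים את {tokenizer.mask_token}"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the five highest-scoring tokens.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_idx].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```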
> NOTE: This code will **not** work correctly and will give bad results if you use `BertTokenizer`. Please use `AutoTokenizer` or `BertTokenizerFast`.
2. Demo site:
You can experiment with the model in a GUI interface here: https://dicta-bert-demo.netlify.app/?genre=rabbinic
- The main part of the GUI consists of word buttons visualizing the tokenization of the sentences. Clicking on a button masks it, and then three BEREL word predictions are shown. Clicking on that bubble expands it to 10 predictions; alternatively, ctrl-clicking on that initial bubble expands to 30 predictions.
- Ctrl-clicking adjacent word buttons combines them into a single token for the mask.
- The edit box on top contains the input sentence; this can be modified at will, and the word-buttons will adjust as relevant.
| 1,542 | [
[
-0.02838134765625,
-0.06158447265625,
-0.0031948089599609375,
0.035675048828125,
-0.0295562744140625,
-0.007625579833984375,
-0.040313720703125,
-0.02581787109375,
0.01007843017578125,
0.014556884765625,
-0.04449462890625,
-0.0286407470703125,
-0.08038330078125,... |
Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k-4bit-32g | 2023-07-06T18:09:47.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Panchovix | null | null | Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k-4bit-32g | 0 | 2 | transformers | 2023-06-26T22:50:20 | ---
license: other
---
[WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit.
It was created with GPTQ-for-LLaMA using group size 32 and act-order enabled, to keep perplexity as close as possible to the FP16 model.
I highly suggest using exllama to avoid VRAM issues.
Use compress_pos_emb = 4 for any context length up to 8192 tokens.
If you have two 24 GB GPUs, use the following to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21 | 647 | [
[
-0.050537109375,
-0.074951171875,
0.0053863525390625,
0.0277252197265625,
-0.054656982421875,
0.0034389495849609375,
0.01049041748046875,
-0.04144287109375,
0.02001953125,
0.053680419921875,
-0.04052734375,
-0.033050537109375,
-0.0303955078125,
-0.0054855346... |
ifti98/dqn-SpaceInvadersNoFrameskip | 2023-06-27T06:23:22.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | ifti98 | null | null | ifti98/dqn-SpaceInvadersNoFrameskip | 0 | 2 | stable-baselines3 | 2023-06-27T06:22:47 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 592.00 +/- 93.84
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ifti98 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ifti98 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ifti98
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,752 | [
[
-0.04339599609375,
-0.03814697265625,
0.019775390625,
0.025482177734375,
-0.0107269287109375,
-0.0165557861328125,
0.01033782958984375,
-0.012359619140625,
0.01189422607421875,
0.0225372314453125,
-0.072509765625,
-0.03448486328125,
-0.024932861328125,
-0.00... |
memotirre90/Equipo16_gpt2-HotelSentiment | 2023-06-27T08:08:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | memotirre90 | null | null | memotirre90/Equipo16_gpt2-HotelSentiment | 0 | 2 | transformers | 2023-06-27T07:01:15 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Equipo16_gpt2-HotelSentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Equipo16_gpt2-HotelSentiment
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6560
- Accuracy: 0.8994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| 1,182 | [
[
-0.0286712646484375,
-0.04400634765625,
0.0218505859375,
0.0092620849609375,
-0.044219970703125,
-0.0243988037109375,
-0.018707275390625,
-0.0188751220703125,
0.0147857666015625,
0.013671875,
-0.0477294921875,
-0.043670654296875,
-0.04388427734375,
-0.000991... |
olianate/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-27T07:27:02.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | olianate | null | null | olianate/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-27T07:10:54 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 673.50 +/- 135.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga olianate -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga olianate -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga olianate
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 3000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,759 | [
[
-0.04315185546875,
-0.04010009765625,
0.0196533203125,
0.025543212890625,
-0.01097869873046875,
-0.018524169921875,
0.00958251953125,
-0.01337432861328125,
0.0127716064453125,
0.02215576171875,
-0.07080078125,
-0.034912109375,
-0.0242156982421875,
-0.0035057... |
Braddy/git-base-test3 | 2023-10-22T21:28:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"git",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | Braddy | null | null | Braddy/git-base-test3 | 0 | 2 | transformers | 2023-06-27T17:15:26 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: git-base-test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-test3
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0608
- eval_wer_score: 21.6785
- eval_rouge_score: 0.0467
- eval_bert_score: 0.7569
- eval_runtime: 34.8187
- eval_samples_per_second: 2.901
- eval_steps_per_second: 0.373
- epoch: 41.11
- step: 50500
- CIDEr-D: 106.96
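For inference, a minimal captioning sketch following the standard GIT usage (the image path is illustrative, and it assumes this checkpoint keeps the base model's processor):
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("Braddy/git-base-test3")
model = AutoModelForCausalLM.from_pretrained("Braddy/git-base-test3")

# Encode an image and generate a caption.
image = Image.open("example.jpg")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```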
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,385 | [
[
-0.032623291015625,
-0.05755615234375,
0.01291656494140625,
-0.0007443428039550781,
-0.02789306640625,
-0.00818634033203125,
-0.008056640625,
-0.00972747802734375,
0.015472412109375,
0.0199127197265625,
-0.0362548828125,
-0.037811279296875,
-0.0535888671875,
... |
hoaio/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-28T04:12:28.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | hoaio | null | null | hoaio/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-28T04:11:52 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.00 +/- 100.70
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hoaio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hoaio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hoaio
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,750 | [
[
-0.04364013671875,
-0.039947509765625,
0.0196075439453125,
0.025604248046875,
-0.011383056640625,
-0.0184478759765625,
0.0101318359375,
-0.0129547119140625,
0.01276397705078125,
0.0231170654296875,
-0.0709228515625,
-0.03521728515625,
-0.0248870849609375,
-0... |
padmajabfrl/Religion-Classification-Custom-Model | 2023-06-28T08:35:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | padmajabfrl | null | null | padmajabfrl/Religion-Classification-Custom-Model | 0 | 2 | transformers | 2023-06-28T06:34:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Religion-Classification-Custom-Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Religion-Classification-Custom-Model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0141 | 1.0 | 5102 | 0.0026 | 0.9995 |
| 0.0033 | 2.0 | 10204 | 0.0015 | 0.9996 |
| 0.0024 | 3.0 | 15306 | 0.0021 | 0.9996 |
| 0.0005 | 4.0 | 20408 | 0.0007 | 0.9999 |
| 0.0013 | 5.0 | 25510 | 0.0009 | 0.9999 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,633 | [
[
-0.03515625,
-0.03607177734375,
0.0025348663330078125,
0.007049560546875,
-0.0208892822265625,
-0.005680084228515625,
-0.0066070556640625,
-0.013702392578125,
0.006488800048828125,
0.0195770263671875,
-0.044830322265625,
-0.05767822265625,
-0.0670166015625,
... |
wyklq/falcon-40b-gptq | 2023-06-28T11:08:48.000Z | [
"transformers",
"safetensors",
"RefinedWeb",
"text-generation",
"custom_code",
"text-generation-inference",
"region:us"
] | text-generation | wyklq | null | null | wyklq/falcon-40b-gptq | 2 | 2 | transformers | 2023-06-28T10:58:03 | Strict copy of https://huggingface.co/tiiuae/falcon-40b but quantized with GPTQ (on wikitext-2, 4bits, groupsize=128).
Intended to be used with https://github.com/huggingface/text-generation-inference
```
model=wyklq/falcon-40b-gptq
num_shard=2
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:0.8 --model-id $model --num-shard $num_shard --quantize gptq
```
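Once the server is running, a minimal query sketch against the standard TGI `/generate` endpoint (the prompt is illustrative):
```python
import requests

# Send a generation request to the local text-generation-inference server.
response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "The capital of France is",
        "parameters": {"max_new_tokens": 20},
    },
)
print(response.json()["generated_text"])
```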
For full configuration and usage outside docker, please refer to https://github.com/huggingface/text-generation-inference | 650 | [
[
-0.03338623046875,
-0.056671142578125,
0.045440673828125,
0.03369140625,
-0.0274200439453125,
-0.00832366943359375,
0.0138397216796875,
-0.0177459716796875,
0.0162506103515625,
0.036224365234375,
-0.041656494140625,
-0.01490020751953125,
-0.03131103515625,
-... |
catrabbitbear/pyramids1 | 2023-06-28T11:51:08.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | catrabbitbear | null | null | catrabbitbear/pyramids1 | 0 | 2 | ml-agents | 2023-06-28T11:51:06 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: catrabbitbear/pyramids1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,335 | [
[
-0.041290283203125,
-0.03533935546875,
0.0010175704956054688,
0.015106201171875,
-0.01018524169921875,
0.01256561279296875,
0.0171661376953125,
-0.0148162841796875,
0.0340576171875,
0.0306243896484375,
-0.040771484375,
-0.050018310546875,
-0.0297698974609375,
... |
sanchit-gandhi/whisper-medium-dv | 2023-06-29T09:40:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | sanchit-gandhi | null | null | sanchit-gandhi/whisper-medium-dv | 0 | 2 | transformers | 2023-06-28T14:02:16 | ---
language:
- dv
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: whisper-medium-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_13_0 dv
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 8.957818965817019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-dv
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_13_0 dv dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2998
- Wer: 8.9578
To reproduce this run, execute the command in [`run.sh`](./run.sh). Note that you will require the DeepSpeed package, which can be pip installed with:
```
pip install --upgrade deepspeed
```
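For inference (as opposed to reproducing the training run), a minimal sketch with the `transformers` pipeline — the audio path is illustrative:
```python
from transformers import pipeline

# Transcribe a Dhivehi audio file with the fine-tuned checkpoint.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/whisper-medium-dv",
)
print(transcriber("sample_dhivehi.mp3")["text"])
```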
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0349 | 3.58 | 1000 | 0.1622 | 9.9437 |
| 0.0046 | 7.17 | 2000 | 0.2288 | 9.5090 |
| 0.0007 | 10.75 | 3000 | 0.2820 | 9.0952 |
| 0.0 | 14.34 | 4000 | 0.2998 | 8.9578 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1.dev0
- Tokenizers 0.13.3
| 2,215 | [
[
-0.0290679931640625,
-0.042999267578125,
0.01308441162109375,
0.0171661376953125,
-0.020782470703125,
-0.0207366943359375,
-0.01473236083984375,
-0.00250244140625,
0.0008869171142578125,
0.015838623046875,
-0.0635986328125,
-0.032012939453125,
-0.04888916015625,... |
numind/generic-sentiment_analysis-v1 | 2023-09-06T10:06:00.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentiment-analysis",
"text-classification",
"generic",
"sentiment-classification",
"en",
"dataset:Numind/C4_sentiment-analysis",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | numind | null | null | numind/generic-sentiment_analysis-v1 | 0 | 2 | transformers | 2023-06-28T15:42:10 | ---
license: mit
language:
- en
pipeline_tag: feature-extraction
tags:
- sentiment-analysis
- text-classification
- generic
- sentiment-classification
datasets:
- Numind/C4_sentiment-analysis
---
## Model
The base version of [e5-v2](https://huggingface.co/intfloat/e5-base-v2) fine-tuned on an annotated subset of [C4](https://huggingface.co/datasets/Numind/C4_sentiment-analysis). This model provides generic embeddings for sentiment analysis. Embeddings can be used out of the box or fine-tuned on specific datasets.
Blog post: https://www.numind.ai/blog/creating-task-specific-foundation-models-with-gpt-4
## Usage
Below is an example of encoding a text and extracting its embedding.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model = AutoModel.from_pretrained("Numind/e5-base-sentiment_analysis")
tokenizer = AutoTokenizer.from_pretrained("Numind/e5-base-sentiment_analysis")
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)

size = 256
text = "This movie is amazing"

# Tokenize with fixed-length padding and truncation.
encoding = tokenizer(
    text,
    truncation=True,
    padding="max_length",
    max_length=size,
)

# Forward pass; keep the last hidden state (batch of one).
input_ids = torch.tensor(encoding.input_ids).unsqueeze(0).to(device)
emb = model(input_ids, output_hidden_states=True).hidden_states[-1].cpu().detach()

# Mean-pool over the token dimension to get one embedding per text.
embText = torch.mean(emb, axis=1)
``` | 1,361 | [
[
-0.03369140625,
-0.043121337890625,
0.016326904296875,
0.01800537109375,
-0.038330078125,
-0.00812530517578125,
-0.006374359130859375,
-0.0029754638671875,
0.0165863037109375,
0.01654052734375,
-0.056121826171875,
-0.059326171875,
-0.050628662109375,
0.00152... |
EgilKarlsen/DistilRoberta_CSIC-Anomaly | 2023-08-15T02:01:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | EgilKarlsen | null | null | EgilKarlsen/DistilRoberta_CSIC-Anomaly | 0 | 2 | transformers | 2023-06-28T15:59:54 | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DistilRoBERTa_CSIC-Anomaly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilRoBERTa_CSIC-Anomaly
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0165
- Accuracy: 0.9959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1051 | 1.0 | 1094 | 0.0372 | 0.9916 |
| 0.0309 | 2.0 | 2188 | 0.0228 | 0.9942 |
| 0.021 | 3.0 | 3282 | 0.0216 | 0.9954 |
| 0.0156 | 4.0 | 4376 | 0.0143 | 0.9962 |
| 0.0141 | 5.0 | 5470 | 0.0144 | 0.9968 |
| 0.0115 | 6.0 | 6564 | 0.0153 | 0.9968 |
| 0.0092 | 7.0 | 7658 | 0.0174 | 0.9966 |
| 0.0067 | 8.0 | 8752 | 0.0143 | 0.9974 |
| 0.0049 | 9.0 | 9846 | 0.0150 | 0.998 |
| 0.0026 | 10.0 | 10940 | 0.0151 | 0.998 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
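For reference, a minimal inference sketch using the `transformers` pipeline API (the label names this checkpoint returns are not documented here, and the HTTP-request input below is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="EgilKarlsen/DistilRoberta_CSIC-Anomaly")
# CSIC consists of raw HTTP requests labelled normal vs. anomalous
print(clf("GET /tienda1/publico/anadir.jsp?id=2&cantidad=1 HTTP/1.1"))
```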
| 1,950 | [
[
-0.03143310546875,
-0.04302978515625,
0.01157379150390625,
0.018310546875,
-0.01361846923828125,
-0.01111602783203125,
0.00896453857421875,
-0.01010894775390625,
0.0154571533203125,
0.01363372802734375,
-0.0574951171875,
-0.0501708984375,
-0.063720703125,
-0... |
anwarrehman/simply-law-classify-v1.1 | 2023-06-28T18:41:16.000Z | [
"keras",
"region:us"
] | null | anwarrehman | null | null | anwarrehman/simply-law-classify-v1.1 | 0 | 2 | keras | 2023-06-28T18:40:09 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
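For reference, the values above correspond to a stock Keras Adam optimizer; below is a minimal sketch that mirrors just the listed optimizer config (the model architecture and loss are undocumented, so only the optimizer can be reconstructed):
```python
import tensorflow as tf

# Mirrors the hyperparameter table above (all Adam defaults except jit_compile)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    jit_compile=True,
)
```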
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03759765625,
-0.0401611328125,
0.0321044921875,
0.007656097412109375,
-0.0433349609375,
-0.017974853515625,
0.01090240478515625,
-0.0037326812744140625,
0.020172119140625,
0.0307464599609375,
-0.043670654296875,
-0.051025390625,
-0.039306640625,
0.0002460... |
S3S3/ppo-Pyramids_Training1 | 2023-06-28T19:42:01.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | S3S3 | null | null | S3S3/ppo-Pyramids_Training1 | 0 | 2 | ml-agents | 2023-06-28T19:41:53 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: S3S3/ppo-Pyramids_Training1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,339 | [
[
-0.039215087890625,
-0.03326416015625,
0.0022296905517578125,
0.01448822021484375,
-0.01104736328125,
0.012786865234375,
0.016845703125,
-0.014312744140625,
0.03326416015625,
0.029388427734375,
-0.04095458984375,
-0.04888916015625,
-0.0306243896484375,
-0.01... |
Eitanli/sentense_pairs_cls | 2023-10-10T15:54:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Eitanli | null | null | Eitanli/sentense_pairs_cls | 0 | 2 | transformers | 2023-06-28T19:53:40 | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: sentense_pairs_cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentense_pairs_cls
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.1 | 100 | 0.0202 | 0.9474 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
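Since the card leaves the task undocumented, here is a minimal sentence-pair inference sketch (the input pair and the meaning of the output labels are assumptions):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Eitanli/sentense_pairs_cls"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# Encode the two sentences as a single pair
inputs = tok("A man is playing a guitar.", "Someone is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```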
| 1,241 | [
[
-0.0298919677734375,
-0.047210693359375,
0.0091705322265625,
0.0173187255859375,
-0.03460693359375,
-0.033843994140625,
-0.0191497802734375,
-0.017333984375,
0.018463134765625,
0.0299072265625,
-0.056365966796875,
-0.043212890625,
-0.041717529296875,
-0.0148... |
beamandym/bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos | 2023-06-29T03:43:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | beamandym | null | null | beamandym/bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos | 0 | 2 | transformers | 2023-06-29T01:25:16 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0717
- F1: 0.5857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9243 | 1.0 | 766 | 1.0143 | 0.5370 |
| 0.8299 | 2.0 | 1532 | 0.9847 | 0.5773 |
| 0.6513 | 3.0 | 2298 | 1.0717 | 0.5857 |
| 0.4954 | 4.0 | 3064 | 1.2263 | 0.5773 |
| 0.3879 | 5.0 | 3830 | 1.3412 | 0.5795 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
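A minimal inference sketch (MeIA is a Spanish-language review sentiment task; the base checkpoint uses 1-to-5-star labels, and the example review is illustrative):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="beamandym/bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos",
)
print(clf("El hotel era excelente y el personal muy amable."))
```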
| 1,725 | [
[
-0.043243408203125,
-0.037994384765625,
0.00750732421875,
0.025604248046875,
-0.03302001953125,
-0.0208892822265625,
-0.0257720947265625,
-0.01256561279296875,
0.013885498046875,
0.0243377685546875,
-0.05780029296875,
-0.05303955078125,
-0.0435791015625,
-0.... |
r45289/finetuned-bert-chinese-base | 2023-06-29T07:54:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:peoples_daily_ner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | r45289 | null | null | r45289/finetuned-bert-chinese-base | 0 | 2 | transformers | 2023-06-29T03:04:31 | ---
tags:
- generated_from_trainer
datasets:
- peoples_daily_ner
metrics:
- f1
model-index:
- name: finetuned-bert-chinese-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: peoples_daily_ner
type: peoples_daily_ner
config: peoples_daily_ner
split: validation
args: peoples_daily_ner
metrics:
- name: F1
type: f1
value: 0.957080981756136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-chinese-base
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the peoples_daily_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0185
- F1: 0.9571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0494 | 1.0 | 1739 | 0.0250 | 0.9283 |
| 0.0146 | 2.0 | 3478 | 0.0202 | 0.9505 |
| 0.0051 | 3.0 | 5217 | 0.0185 | 0.9571 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
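A minimal NER sketch (peoples_daily_ner is a Chinese corpus tagged with person, organization, and location entities; the sentence below is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="r45289/finetuned-bert-chinese-base",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("人民日报社位于北京。"))
```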
| 1,775 | [
[
-0.033050537109375,
-0.0460205078125,
-0.0018262863159179688,
0.0198822021484375,
-0.028656005859375,
-0.040924072265625,
-0.02130126953125,
-0.0216064453125,
0.006732940673828125,
0.02825927734375,
-0.056793212890625,
-0.03985595703125,
-0.034759521484375,
... |
sleepynlp/dqn-SpaceInvadersNoFrameskip-v4-leo | 2023-06-29T04:39:10.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | sleepynlp | null | null | sleepynlp/dqn-SpaceInvadersNoFrameskip-v4-leo | 0 | 2 | stable-baselines3 | 2023-06-29T04:38:32 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 604.50 +/- 141.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sleepynlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sleepynlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sleepynlp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,762 | [
[
-0.039276123046875,
-0.0401611328125,
0.020111083984375,
0.0277862548828125,
-0.01296234130859375,
-0.0166168212890625,
0.004123687744140625,
-0.0158538818359375,
0.01560211181640625,
0.022857666015625,
-0.06890869140625,
-0.03460693359375,
-0.0240325927734375,
... |
jyarac/bert-base-multilingual-uncased-sentiment-MeIA | 2023-06-29T07:33:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jyarac | null | null | jyarac/bert-base-multilingual-uncased-sentiment-MeIA | 0 | 2 | transformers | 2023-06-29T04:43:23 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased-sentiment-MeIA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment-MeIA
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0751
- eval_f1: 0.5932
- eval_runtime: 74.8554
- eval_samples_per_second: 70.135
- eval_steps_per_second: 2.204
- epoch: 4.0
- step: 1532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,319 | [
[
-0.039306640625,
-0.04681396484375,
0.01166534423828125,
0.035675048828125,
-0.040863037109375,
-0.0164642333984375,
-0.0306396484375,
-0.01006317138671875,
0.023162841796875,
0.022125244140625,
-0.053375244140625,
-0.05242919921875,
-0.03961181640625,
-0.00... |
bh8648/distilbert-base-uncased-finetuned-emotion | 2023-07-14T00:40:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | bh8648 | null | null | bh8648/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-29T08:08:33 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264349572615436
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3208 | 0.904 | 0.9017 |
| 0.5467 | 2.0 | 500 | 0.2204 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
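A minimal inference sketch (the emotion dataset has six classes: sadness, joy, love, anger, fear, surprise; the exact label mapping of this checkpoint is not documented):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="bh8648/distilbert-base-uncased-finetuned-emotion")
print(clf("I can't stop smiling, this is the best news all year!"))
```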
| 1,842 | [
[
-0.03656005859375,
-0.0421142578125,
0.01435089111328125,
0.0230560302734375,
-0.02667236328125,
-0.0208282470703125,
-0.01348876953125,
-0.010498046875,
0.00902557373046875,
0.00848388671875,
-0.056304931640625,
-0.051177978515625,
-0.05950927734375,
-0.008... |
ENLP/mrasp2 | 2023-06-30T01:29:30.000Z | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"translation",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | ENLP | null | null | ENLP/mrasp2 | 0 | 2 | transformers | 2023-06-29T09:10:03 | ---
tags:
- translation
license: apache-2.0
metrics:
- bleu
- sacrebleu
---
## 1. Project Introduction
This project is based on [mRASP2](https://github.com/PANXiao1994/mRASP2), an excellent open-source machine translation project on GitHub. The officially released fairseq pre-trained weights have been converted to the transformers architecture to make the model easier to use.
## 2. Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_path = 'ENLP/mrasp2'
model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
input_text = ["Welcome to download and use!"]
inputs = tokenizer(input_text, return_tensors="pt", padding=True, max_length=1024, truncation=True)
result = model.generate(**inputs)
result = tokenizer.batch_decode(result, skip_special_tokens=True)
result = [pre.strip() for pre in result]
# ['欢迎下载和使用!']
```
## 3. Notes
The model supports 32 languages; see [mRASP2](https://github.com/PANXiao1994/mRASP2) for details. The tokenizer in this repository is optimized only for Chinese and English; to use other languages, adapt tokenization_bat.py accordingly. Note that this is the official 6e6d-no-mono model; the two 12e12d models could not be converted, for reasons that remain unclear. If you know why, please share.
## 4. Other Models
[ENLP/mrasp](https://huggingface.co/ENLP/mrasp) | 1,085 | [
[
-0.0257720947265625,
-0.0287017822265625,
-0.00370025634765625,
0.0267486572265625,
-0.033416748046875,
-0.01126861572265625,
0.001556396484375,
-0.0081787109375,
0.01192474365234375,
0.00972747802734375,
-0.047119140625,
-0.0234527587890625,
-0.062469482421875,... |
GabrielNewell/ppo-Huggy | 2023-06-29T11:42:04.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | GabrielNewell | null | null | GabrielNewell/ppo-Huggy | 0 | 2 | ml-agents | 2023-06-29T11:42:00 | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GabrielNewell/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,323 | [
[
-0.0416259765625,
-0.045318603515625,
0.0172119140625,
0.00321197509765625,
-0.01519775390625,
0.015289306640625,
0.0135955810546875,
-0.02276611328125,
0.041595458984375,
0.034210205078125,
-0.048736572265625,
-0.046417236328125,
-0.0304412841796875,
-0.017... |
sxandie/NER2.0.3-alpha_num_dataset | 2023-06-29T17:25:39.000Z | [
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | sxandie | null | null | sxandie/NER2.0.3-alpha_num_dataset | 0 | 2 | transformers | 2023-06-29T17:09:17 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/NER2.0.3-alpha_num_dataset
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/NER2.0.3-alpha_num_dataset
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3026
- Validation Loss: 0.2050
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29135, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3026 | 0.2050 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
| 1,627 | [
[
-0.040191650390625,
-0.03564453125,
0.0175628662109375,
-0.00036978721618652344,
-0.026336669921875,
-0.0294342041015625,
-0.01374053955078125,
-0.0190887451171875,
0.0040740966796875,
0.0109100341796875,
-0.053192138671875,
-0.041351318359375,
-0.05636596679687... |
IooHooI/my_awesome_qa_model | 2023-06-30T12:05:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:sberquad",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | IooHooI | null | null | IooHooI/my_awesome_qa_model | 0 | 2 | transformers | 2023-06-30T11:33:30 | ---
tags:
- generated_from_trainer
datasets:
- sberquad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4718 |
| 0.9921 | 2.0 | 500 | 2.7453 |
| 0.9921 | 3.0 | 750 | 2.9411 |
| 0.5693 | 4.0 | 1000 | 3.3692 |
| 0.5693 | 5.0 | 1250 | 3.4130 |
| 0.3076 | 6.0 | 1500 | 3.5991 |
| 0.3076 | 7.0 | 1750 | 4.0631 |
| 0.1596 | 8.0 | 2000 | 4.1718 |
| 0.1596 | 9.0 | 2250 | 4.3437 |
| 0.0984 | 10.0 | 2500 | 4.3730 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
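A minimal extractive-QA sketch (sberquad is a Russian SQuAD-style dataset, so the example is in Russian; note that the rising validation loss above suggests the later epochs overfit):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="IooHooI/my_awesome_qa_model")
print(qa(question="Где находится Москва?", context="Москва находится в России."))
```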
| 1,775 | [
[
-0.032196044921875,
-0.050689697265625,
0.0111236572265625,
0.0007081031799316406,
-0.01812744140625,
-0.0202178955078125,
-0.005924224853515625,
-0.00815582275390625,
0.004009246826171875,
0.032958984375,
-0.05010986328125,
-0.044921875,
-0.0419921875,
-0.0... |
Qasim30/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-30T11:55:44.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Qasim30 | null | null | Qasim30/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-30T11:55:08 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 647.00 +/- 179.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Qasim30 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Qasim30 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Qasim30
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,756 | [
[
-0.042999267578125,
-0.0382080078125,
0.019866943359375,
0.0247039794921875,
-0.010894775390625,
-0.01727294921875,
0.01079559326171875,
-0.01152801513671875,
0.01221466064453125,
0.0226898193359375,
-0.07244873046875,
-0.0357666015625,
-0.024444580078125,
-... |
lnair/graphormer-ogbg-molhiv | 2023-06-30T15:03:49.000Z | [
"transformers",
"pytorch",
"graphormer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | lnair | null | null | lnair/graphormer-ogbg-molhiv | 0 | 2 | transformers | 2023-06-30T14:50:08 | ---
tags:
- generated_from_trainer
model-index:
- name: graphormer-ogbg-molhiv
results: []
---
# graphormer-ogbg-molhiv
This model was trained from `clefourrier/pcqm4mv2_graphormer_base` on the `OGB/ogbg-molhiv` dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0728
- eval_runtime: 464.6027
- eval_samples_per_second: 8.853
- eval_steps_per_second: 0.555
- step: 0
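A minimal loading sketch (this assumes the Graphormer classes shipped with transformers v4.27; featurizing molecules into the model's graph inputs, e.g. via `GraphormerDataCollator`, is omitted here):
```python
from transformers import GraphormerForGraphClassification

# ogbg-molhiv is a binary graph classification task (HIV activity)
model = GraphormerForGraphClassification.from_pretrained("lnair/graphormer-ogbg-molhiv")
```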
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 512
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 5120
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1,030 | [
[
-0.040618896484375,
-0.0450439453125,
0.025787353515625,
-0.0006527900695800781,
-0.0294189453125,
-0.0185546875,
-0.01053619384765625,
-0.00923919677734375,
-0.00679779052734375,
0.0286407470703125,
-0.04052734375,
-0.051055908203125,
-0.05670166015625,
-0.... |
amittian/setfit_ds_version_0_0_5 | 2023-06-30T15:17:25.000Z | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | amittian | null | null | amittian/setfit_ds_version_0_0_5 | 0 | 2 | sentence-transformers | 2023-06-30T15:17:08 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amittian/setfit_ds_version_0_0_5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
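A minimal sketch of that two-step recipe, assuming the 2023-era `SetFitTrainer` API (the base model, data, and labels here are illustrative, not what this checkpoint was trained on):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst"],
    "label": [1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedding body
    num_iterations=20,                # contrastive pairs generated per labelled example
)
trainer.train()  # step 2 fits the classification head on the tuned embeddings
```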
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amittian/setfit_ds_version_0_0_5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,553 | [
[
-0.00917816162109375,
-0.060943603515625,
0.0263671875,
-0.0157318115234375,
-0.01238250732421875,
-0.0184478759765625,
-0.01488494873046875,
-0.0083465576171875,
0.00406646728515625,
0.033172607421875,
-0.042755126953125,
-0.0198822021484375,
-0.04046630859375,... |
Ashish08/distilhubert-finetuned-gtzan | 2023-06-30T17:11:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | audio-classification | Ashish08 | null | null | Ashish08/distilhubert-finetuned-gtzan | 0 | 2 | transformers | 2023-06-30T15:35:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6606
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9739 | 1.0 | 113 | 1.7836 | 0.61 |
| 1.3684 | 2.0 | 226 | 1.2751 | 0.66 |
| 1.0257 | 3.0 | 339 | 0.9829 | 0.73 |
| 0.8737 | 4.0 | 452 | 0.8757 | 0.76 |
| 0.7396 | 5.0 | 565 | 0.6852 | 0.79 |
| 0.3946 | 6.0 | 678 | 0.6898 | 0.78 |
| 0.5027 | 7.0 | 791 | 0.6836 | 0.8 |
| 0.2452 | 8.0 | 904 | 0.5940 | 0.82 |
| 0.259 | 9.0 | 1017 | 0.6510 | 0.8 |
| 0.1723 | 10.0 | 1130 | 0.6606 | 0.81 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
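A minimal genre-classification sketch (GTZAN spans ten genres; `song.wav` is a hypothetical local audio file):
```python
from transformers import pipeline

clf = pipeline("audio-classification", model="Ashish08/distilhubert-finetuned-gtzan")
print(clf("song.wav"))  # returns the top genres with confidence scores
```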
| 2,232 | [
[
-0.036651611328125,
-0.040069580078125,
0.00980377197265625,
0.004238128662109375,
-0.0201416015625,
-0.018951416015625,
-0.00803375244140625,
-0.01519012451171875,
0.01413726806640625,
0.013092041015625,
-0.05291748046875,
-0.0482177734375,
-0.052154541015625,
... |
sharpbai/Wizard-Vicuna-13B-Uncensored-HF-onnx | 2023-07-01T04:40:39.000Z | [
"transformers",
"onnx",
"llama",
"text-generation",
"uncensored",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | sharpbai | null | null | sharpbai/Wizard-Vicuna-13B-Uncensored-HF-onnx | 2 | 2 | transformers | 2023-06-30T17:12:59 | ---
license: other
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
tags:
- uncensored
inference: true
---
# Wizard-Vicuna-13B-Uncensored-HF-onnx
A version of [TheBloke/Wizard-Vicuna-13B-Uncensored-HF](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF)
converted to ONNX fp16 using the optimum library.
## Convert command
```bash
SAVE_DIR=/path/to/save
optimum-cli export onnx --model TheBloke/Wizard-Vicuna-13B-Uncensored-HF --task causal-lm-with-past --fp16 --device cuda $SAVE_DIR
rm $SAVE_DIR/Constant_*
rm $SAVE_DIR/decoder_with_past_model.onnx*
rm $SAVE_DIR/decoder_model.onnx*
```
## Usage
First, load the ONNX model using `ORTModelForCausalLM`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from optimum.onnxruntime import ORTModelForCausalLM
BASE_MODEL = "sharpbai/Wizard-Vicuna-13B-Uncensored-HF-onnx"
tok = AutoTokenizer.from_pretrained(BASE_MODEL, use_fast=False)
model = ORTModelForCausalLM.from_pretrained(BASE_MODEL,
provider='CUDAExecutionProvider',
torch_dtype=torch.float16)
streamer = TextStreamer(tok)
```
Then you can generate text:
```python
from datetime import datetime
MAX_NEW_TOKENS=200
inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
time = datetime.now()
# Despite returning the usual output, the streamer will also print the generated text to stdout.
_ = model.generate(input_ids=inputs.input_ids.to('cuda:0'), streamer=streamer, max_new_tokens=MAX_NEW_TOKENS)
elapsed = datetime.now() - time
speed = MAX_NEW_TOKENS / elapsed.total_seconds()
print(f"elapsed {elapsed}, speed {speed} token/s")
```
You can compare the ONNX model with the plain transformers version:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
BASE_MODEL = "TheBloke/Wizard-Vicuna-13B-Uncensored-HF"
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map='auto',
torch_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained(BASE_MODEL, use_fast=False)
streamer = TextStreamer(tok)
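# To compare, rerun the same generate() snippet from the ONNX section with this model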
```
I have run some comparison tests in this notebook:
https://colab.research.google.com/gist/sharpbai/745fa7c6b2069544c254b1fb73070698/infer-with-onnxruntime-vs-transformers-llama-13b.ipynb
## Original model card
-----------------------------------------
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Wizard-Vicuna-13B-Uncensored float16 HF
This is a float16 HF repo for [Eric Hartford's 'uncensored' training of Wizard-Vicuna 13B](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored).
It is the result of converting Eric's float32 repo to float16 for easier storage and use.
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML).
* [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card
This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
| 6,010 | [
[
-0.0298004150390625,
-0.052459716796875,
0.007244110107421875,
0.0143280029296875,
-0.021331787109375,
-0.0200042724609375,
-0.0026874542236328125,
-0.02972412109375,
0.0169525146484375,
0.0287628173828125,
-0.048187255859375,
-0.0281524658203125,
-0.03176879882... |
aroot/mbart-finetuned-eng-ind-184485140105 | 2023-06-30T22:17:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | aroot | null | null | aroot/mbart-finetuned-eng-ind-184485140105 | 0 | 2 | transformers | 2023-06-30T21:59:09 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-184485140105
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-184485140105
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8577
- Bleu: 20.4223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
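A minimal English-to-Indonesian inference sketch (the language codes follow the mBART-50 convention and are assumed from the eng-ind model name):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "aroot/mbart-finetuned-eng-ind-184485140105"
tok = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)
batch = tok("How are you today?", return_tensors="pt")
# Force the decoder to start generating in Indonesian
out = model.generate(**batch, forced_bos_token_id=tok.lang_code_to_id["id_ID"])
print(tok.batch_decode(out, skip_special_tokens=True))
```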
| 1,205 | [
[
-0.04425048828125,
-0.048370361328125,
0.017181396484375,
0.0173492431640625,
-0.02984619140625,
-0.037750244140625,
-0.0164642333984375,
-0.01076507568359375,
0.012054443359375,
0.02593994140625,
-0.0550537109375,
-0.0303497314453125,
-0.04705810546875,
-0.... |
anas21/keras-demo | 2023-06-30T22:08:24.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/keras-demo | 0 | 2 | keras | 2023-06-30T22:08:22 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.031890869140625,
0.00814056396484375,
-0.043243408203125,
-0.0176849365234375,
0.01097869873046875,
-0.0033721923828125,
0.020477294921875,
0.030517578125,
-0.043792724609375,
-0.051239013671875,
-0.040008544921875,
0... |
anas21/keras-dummy-functional | 2023-06-30T22:09:10.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/keras-dummy-functional | 0 | 2 | keras | 2023-06-30T22:09:09 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.031890869140625,
0.00814056396484375,
-0.043243408203125,
-0.0176849365234375,
0.01097869873046875,
-0.0033721923828125,
0.020477294921875,
0.030517578125,
-0.043792724609375,
-0.051239013671875,
-0.040008544921875,
0... |
anas21/autoencoder-keras-mnist-demo | 2023-06-30T22:29:15.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/autoencoder-keras-mnist-demo | 0 | 2 | keras | 2023-06-30T22:27:58 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.03814697265625,
-0.03948974609375,
0.03131103515625,
0.00818634033203125,
-0.04400634765625,
-0.0181884765625,
0.0099639892578125,
-0.00421142578125,
0.0196533203125,
0.0294952392578125,
-0.0443115234375,
-0.051788330078125,
-0.04022216796875,
0.001031875... |
cbdb/ClassicalChineseOfficeTitleTranslation | 2023-06-30T23:10:16.000Z | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"Seq2SeqLM",
"古文",
"文言文",
"中国古代官职翻译",
"ancient",
"classical",
"zh",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | cbdb | null | null | cbdb/ClassicalChineseOfficeTitleTranslation | 3 | 2 | transformers | 2023-06-30T22:28:37 | ---
language:
- zh
tags:
- Seq2SeqLM
- 古文
- 文言文
- 中国古代官职翻译
- ancient
- classical
license: cc-by-nc-sa-4.0
metrics:
- sacrebleu
---
# <font color="IndianRed"> TITO (Classical Chinese Office Title Translation)</font>
[](https://colab.research.google.com/drive/1UoG3QebyBlK6diiYckiQv-5dRB9dA4iv?usp=sharing/)
Our model <font color="cornflowerblue">TITO (Classical Chinese Office Title Translation)</font> is a sequence-to-sequence Classical Chinese language model intended to <font color="IndianRed">translate a Classical Chinese office title into English</font>. The model inherits from MarianMTModel and was fine-tuned on 6,208 high-quality translation pairs collected by the CBDB group (China Biographical Database).
### <font color="IndianRed"> How to use </font>
Here is how to use this model to get the features of a given text in PyTorch:
<font color="cornflowerblue"> 1. Import model and packages </font>
```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Use the GPU when available, otherwise fall back to CPU
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model_name = 'cbdb/ClassicalChineseOfficeTitleTranslation'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)
```
<font color="cornflowerblue"> 2. Load Data </font>
```python
# Load your data here
tobe_translated = ['講筵官','判司簿尉','散騎常侍','殿中省尚輦奉御']
```
<font color="cornflowerblue"> 3. Make a prediction </font>
```python
inputs = tokenizer(tobe_translated, return_tensors="pt", padding=True).to(device)
translated = model.generate(**inputs, max_length=128)
tran = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
for c, t in zip(tobe_translated, tran):
print(f'{c}: {t}')
```
講筵官: Lecturer<br>
判司簿尉: Supervisor of the Commandant of Records<br>
散騎常侍: Policy Advisor<br>
殿中省尚輦奉御: Chief Steward of the Palace Administration<br>
### <font color="IndianRed">Authors </font>
Queenie Luo (queenieluo[at]g.harvard.edu)
<br>
Hongsu Wang
<br>
Peter Bol
<br>
CBDB Group
### <font color="IndianRed">License </font>
Copyright (c) 2023 CBDB
Except where otherwise noted, content on this repository is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or
send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. | 2,472 | [
[
-0.01192474365234375,
-0.039764404296875,
0.01073455810546875,
0.0259552001953125,
-0.048492431640625,
-0.017852783203125,
-0.0257110595703125,
-0.0187530517578125,
0.0190582275390625,
0.036376953125,
-0.0259552001953125,
-0.049285888671875,
-0.051910400390625,
... |
anas21/mnist-demo | 2023-06-30T22:55:56.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/mnist-demo | 0 | 2 | keras | 2023-06-30T22:55:00 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03863525390625,
-0.039642333984375,
0.031402587890625,
0.007678985595703125,
-0.044158935546875,
-0.0184783935546875,
0.00991058349609375,
-0.004512786865234375,
0.0194091796875,
0.0297088623046875,
-0.044219970703125,
-0.051605224609375,
-0.03955078125,
... |
anas21/mnist-wdemo | 2023-06-30T23:12:51.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/mnist-wdemo | 0 | 2 | keras | 2023-06-30T23:11:56 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03863525390625,
-0.039642333984375,
0.03143310546875,
0.007656097412109375,
-0.044158935546875,
-0.0184783935546875,
0.009918212890625,
-0.00453948974609375,
0.0194244384765625,
0.0297088623046875,
-0.044219970703125,
-0.051605224609375,
-0.03955078125,
0... |
anas21/t1 | 2023-07-01T00:39:02.000Z | [
"keras",
"region:us"
] | null | anas21 | null | null | anas21/t1 | 0 | 2 | keras | 2023-07-01T00:38:01 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.03863525390625,
-0.039642333984375,
0.03143310546875,
0.007656097412109375,
-0.044158935546875,
-0.0184783935546875,
0.009918212890625,
-0.00453948974609375,
0.0194244384765625,
0.0297088623046875,
-0.044219970703125,
-0.051605224609375,
-0.03955078125,
0... |
DRAGOO/Speech10 | 2023-07-01T06:44:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | DRAGOO | null | null | DRAGOO/Speech10 | 0 | 2 | transformers | 2023-07-01T01:24:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Speech10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speech10
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0207
- Wer: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.919 | 3.96 | 1000 | 2.9089 | 1 |
| 2.8706 | 7.92 | 2000 | 2.8730 | 1 |
| 2.8394 | 11.88 | 3000 | 2.8621 | 1 |
| 2.5675 | 15.84 | 4000 | 2.5216 | 1 |
| 2.1256 | 19.8 | 5000 | 2.0207 | 1 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
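A minimal ASR sketch for completeness (`sample.wav` is a hypothetical local audio file; note that the reported WER of 1 means this checkpoint does not yet yield usable transcriptions):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DRAGOO/Speech10")
print(asr("sample.wav"))
```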
| 1,655 | [
[
-0.02825927734375,
-0.045806884765625,
0.000942230224609375,
0.0156707763671875,
-0.01444244384765625,
-0.0302886962890625,
-0.020538330078125,
-0.0201568603515625,
0.00928497314453125,
0.025054931640625,
-0.061248779296875,
-0.047607421875,
-0.04461669921875,
... |
Junlaii/bart_4acticle_abstract | 2023-07-01T10:22:56.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"bart",
"text2text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | Junlaii | null | null | Junlaii/bart_4acticle_abstract | 0 | 2 | transformers | 2023-07-01T09:49:49 | ---
license: other
---
This model was trained for article summarization as an assignment for the NLP course at Beihang University's School of Software. Because the model is fairly large, it is hosted in this Hugging Face repository. | 266 | [
[
-0.0309295654296875,
-0.049072265625,
-0.0006151199340820312,
0.058074951171875,
-0.031158447265625,
-0.0316162109375,
0.002735137939453125,
-0.058990478515625,
0.041107177734375,
0.0419921875,
-0.0330810546875,
-0.034423828125,
-0.052093505859375,
0.0126571... |
Tubido/ppo-LunarLander-v2 | 2023-07-01T16:47:29.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Tubido | null | null | Tubido/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-01T15:11:02 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.71 +/- 19.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, following the usual `<algo>-<env>.zip` naming of huggingface_sb3 uploads):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="Tubido/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
Xenova/multilingual-e5-small | 2023-07-01T15:55:18.000Z | [
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"region:us"
] | feature-extraction | Xenova | null | null | Xenova/multilingual-e5-small | 0 | 2 | transformers.js | 2023-07-01T15:54:53 | ---
library_name: "transformers.js"
---
https://huggingface.co/intfloat/multilingual-e5-small with ONNX weights to be compatible with Transformers.js.
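A sketch of the recommended conversion with 🤗 Optimum (see the note below), assuming a recent `optimum` release that supports on-the-fly export via `export=True`:
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

# Export the PyTorch checkpoint to ONNX on the fly
model = ORTModelForFeatureExtraction.from_pretrained(
    "intfloat/multilingual-e5-small", export=True
)
tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-small")

# Save the tokenizer at the repo root and the ONNX weights in an `onnx`
# subfolder, matching the layout this repo recommends
tokenizer.save_pretrained("multilingual-e5-small")
model.save_pretrained("multilingual-e5-small/onnx")
```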
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 509 | [
[
-0.03466796875,
0.019866943359375,
0.0248565673828125,
0.05267333984375,
-0.01184844970703125,
0.0012941360473632812,
-0.01520538330078125,
-0.02813720703125,
0.0308837890625,
0.03192138671875,
-0.055084228515625,
-0.0321044921875,
-0.045196533203125,
0.0237... |
MuGeminorum/gpt2-abcmusic | 2023-07-03T08:17:17.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"music",
"art",
"text-classification",
"en",
"dataset:sander-wood/wikimusictext",
"dataset:sander-wood/massive_abcnotation_dataset",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | MuGeminorum | null | null | MuGeminorum/gpt2-abcmusic | 1 | 2 | transformers | 2023-07-01T16:33:05 | ---
license: mit
datasets:
- sander-wood/wikimusictext
- sander-wood/massive_abcnotation_dataset
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- music
- art
---
# GPT-2 pre-trained with ABC-notation music | 225 | [
[
-0.0297393798828125,
-0.01378631591796875,
0.01947021484375,
0.0303192138671875,
-0.045135498046875,
-0.007610321044921875,
0.005947113037109375,
-0.01922607421875,
-0.009002685546875,
0.0199432373046875,
-0.06549072265625,
-0.0198516845703125,
-0.06130981445312... |
justinhoang/a2c-AntBulletEnv-v0 | 2023-07-26T08:17:57.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | justinhoang | null | null | justinhoang/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-01T20:17:25 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1073.85 +/- 270.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, following the usual `<algo>-<env>.zip` naming of huggingface_sb3 uploads):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="justinhoang/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```
| 791 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
Johnnyfans/normpulse | 2023-07-02T05:18:31.000Z | [
"transformers",
"pytorch",
"PULSE",
"NormPULSE",
"llm",
"zh",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | null | Johnnyfans | null | null | Johnnyfans/normpulse | 2 | 2 | transformers | 2023-07-01T20:56:55 | ---
license: agpl-3.0
language:
- zh
tags:
- PULSE
- NormPULSE
- llm
---
# NormPULSE
[](https://github.com/openmedlab/PULSE/blob/main/LICENSE)
[](https://github.com/openmedlab/PULSE/blob/main/MODEL_LICENSE)
### Download and Usage
```bash
git lfs install
git clone https://huggingface.co/Johnnyfans/normpulse
git clone https://github.com/JOHNNY-fans/NormPULSE
```
## Acknowledgements
- Shanghai AI Laboratory
- Shanghai Jiao Tong University, Qing Yuan Research Institute
- East China University of Science and Technology, Natural Language Processing and Big Data Mining Lab
## Open-Source License
The code in this project is released under the [Apache 2.0](https://github.com/openmedlab/PULSE/blob/main/LICENSE) license, and the model weights under the [GNU AGPL 3.0](https://github.com/openmedlab/PULSE/blob/main/MODEL_LICENSE) license. If services built on the models in this project, or modified versions of them, produce misleading or harmful statements and cause adverse effects, the responsibility lies with the service provider and not with this project.
| 816 | [
[
-0.0482177734375,
-0.043365478515625,
0.026397705078125,
0.033721923828125,
-0.047576904296875,
-0.00998687744140625,
-0.01061248779296875,
-0.021697998046875,
0.01030731201171875,
0.00931549072265625,
-0.0399169921875,
-0.05743408203125,
-0.0477294921875,
0... |
redjohn08/dqn-SpaceInvadersNoFrameskip-v4 | 2023-07-01T21:54:15.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | redjohn08 | null | null | redjohn08/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-01T21:53:37 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 705.00 +/- 107.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga redjohn08 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga redjohn08 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga redjohn08
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,762 | [
[
-0.044219970703125,
-0.039703369140625,
0.0192718505859375,
0.0252838134765625,
-0.01123809814453125,
-0.018585205078125,
0.0092926025390625,
-0.01404571533203125,
0.01398468017578125,
0.0224456787109375,
-0.07244873046875,
-0.035400390625,
-0.025421142578125,
... |
RyanOrth/dqn-SpaceInvadersNoFrameskip-v4 | 2023-07-02T01:03:31.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | RyanOrth | null | null | RyanOrth/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-02T01:02:51 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 609.50 +/- 287.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RyanOrth -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RyanOrth -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RyanOrth
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,759 | [
[
-0.04302978515625,
-0.039337158203125,
0.0202789306640625,
0.02593994140625,
-0.01065826416015625,
-0.0174713134765625,
0.010498046875,
-0.012847900390625,
0.01268768310546875,
0.0225830078125,
-0.07244873046875,
-0.035003662109375,
-0.0250091552734375,
-0.0... |
tielupeng/xlm-roberta-base-finetuned-marc | 2023-07-02T06:09:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | tielupeng | null | null | tielupeng/xlm-roberta-base-finetuned-marc | 0 | 2 | transformers | 2023-07-02T05:59:48 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9367
- Mae: 0.4425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.159 | 1.0 | 308 | 1.0342 | 0.4956 |
| 0.9878 | 2.0 | 616 | 0.9367 | 0.4425 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
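## Example usage
A minimal inference sketch with the `transformers` pipeline; the review text is illustrative, and the label names depend on the training configuration (the MAE metric suggests the labels are star ratings).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="tielupeng/xlm-roberta-base-finetuned-marc"
)
print(classifier("Great battery life, but the screen scratches easily."))
```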
| 1,423 | [
[
-0.03790283203125,
-0.04693603515625,
0.0250091552734375,
0.01105499267578125,
-0.0235748291015625,
-0.02862548828125,
-0.0190887451171875,
-0.01372528076171875,
0.0008172988891601562,
0.046142578125,
-0.060302734375,
-0.04443359375,
-0.05694580078125,
-0.00... |
PhongLe1311/bert-finetuned-squad-final | 2023-07-02T12:00:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | PhongLe1311 | null | null | PhongLe1311/bert-finetuned-squad-final | 0 | 2 | transformers | 2023-07-02T10:36:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad-final
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
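## Example usage
A minimal sketch with the `transformers` question-answering pipeline; the question and context below are illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="PhongLe1311/bert-finetuned-squad-final")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```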
| 1,036 | [
[
-0.04364013671875,
-0.051177978515625,
0.01111602783203125,
0.0190582275390625,
-0.02978515625,
-0.010772705078125,
-0.014007568359375,
-0.0175018310546875,
0.007568359375,
0.027801513671875,
-0.07635498046875,
-0.035369873046875,
-0.036834716796875,
-0.0069... |
Talha185/bert-finetuned-ner | 2023-07-02T18:00:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | Talha185 | null | null | Talha185/bert-finetuned-ner | 0 | 2 | transformers | 2023-07-02T16:48:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9333994381094034
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9418827649462187
- name: Accuracy
type: accuracy
value: 0.9863572143403779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0633
- Precision: 0.9334
- Recall: 0.9505
- F1: 0.9419
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0898 | 1.0 | 1756 | 0.0804 | 0.9184 | 0.9303 | 0.9243 | 0.9805 |
| 0.0346 | 2.0 | 3512 | 0.0650 | 0.9305 | 0.9512 | 0.9407 | 0.9863 |
| 0.0177 | 3.0 | 5268 | 0.0633 | 0.9334 | 0.9505 | 0.9419 | 0.9864 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
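## Example usage
A minimal sketch with the `transformers` token-classification pipeline; `aggregation_strategy="simple"` merges sub-word tokens into whole entity spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Talha185/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```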
| 2,194 | [
[
-0.041748046875,
-0.0467529296875,
0.01050567626953125,
0.01128387451171875,
-0.0250091552734375,
-0.036468505859375,
-0.016143798828125,
-0.0160369873046875,
0.01129913330078125,
0.023345947265625,
-0.05865478515625,
-0.041961669921875,
-0.0479736328125,
-0... |
michaelscheinfeild/vit-base-patch16-224-finetuned-flower | 2023-07-02T18:21:11.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | michaelscheinfeild | null | null | michaelscheinfeild/vit-base-patch16-224-finetuned-flower | 0 | 2 | transformers | 2023-07-02T18:00:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
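## Example usage
A minimal sketch with the `transformers` image-classification pipeline; the image path is a placeholder, and the class names come from the (undocumented) imagefolder dataset.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="michaelscheinfeild/vit-base-patch16-224-finetuned-flower",
)
# "flower.jpg" is a placeholder path to a local image
print(classifier("flower.jpg"))
```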
| 1,119 | [
[
-0.0307769775390625,
-0.06048583984375,
0.007091522216796875,
0.0204315185546875,
-0.030059814453125,
-0.038909912109375,
-0.01384735107421875,
-0.0225677490234375,
0.00707244873046875,
0.0233001708984375,
-0.057830810546875,
-0.036407470703125,
-0.0434265136718... |
hopkins/mbart-finetuned-eng-ind-34784827403 | 2023-07-02T18:28:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | hopkins | null | null | hopkins/mbart-finetuned-eng-ind-34784827403 | 0 | 2 | transformers | 2023-07-02T18:14:22 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-34784827403
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-34784827403
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7658
- Bleu: 21.7696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
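## Example usage
A minimal English-to-Indonesian sketch, assuming this fine-tune keeps the parent mBART-50 tokenizer and its language codes (`en_XX` for English, `id_ID` for Indonesian).
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

name = "hopkins/mbart-finetuned-eng-ind-34784827403"
model = MBartForConditionalGeneration.from_pretrained(name)
tokenizer = MBart50TokenizerFast.from_pretrained(name)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"],  # target: Indonesian
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```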
| 1,248 | [
[
-0.045501708984375,
-0.049163818359375,
0.017791748046875,
0.014556884765625,
-0.0298309326171875,
-0.037139892578125,
-0.01812744140625,
-0.011749267578125,
0.0136566162109375,
0.025390625,
-0.055755615234375,
-0.0306854248046875,
-0.044891357421875,
-0.007... |
SwampMan/dqn-SpaceInvadersNoFrameskip-v4 | 2023-07-03T20:37:38.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | SwampMan | null | null | SwampMan/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-02T21:25:57 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 591.00 +/- 154.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SwampMan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SwampMan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SwampMan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,760 | [
[
-0.04400634765625,
-0.037841796875,
0.020538330078125,
0.0246734619140625,
-0.01209259033203125,
-0.015289306640625,
0.0089569091796875,
-0.01064300537109375,
0.01210784912109375,
0.02716064453125,
-0.06805419921875,
-0.034088134765625,
-0.0255584716796875,
... |
hopkins/mbart-finetuned-eng-kor-24 | 2023-07-02T23:25:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | hopkins | null | null | hopkins/mbart-finetuned-eng-kor-24 | 0 | 2 | transformers | 2023-07-02T23:07:23 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-24
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9896
- Bleu: 7.0455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,229 | [
[
-0.045867919921875,
-0.05242919921875,
0.015899658203125,
0.01534271240234375,
-0.0298004150390625,
-0.034912109375,
-0.0190582275390625,
-0.0130157470703125,
0.0134735107421875,
0.0262451171875,
-0.058563232421875,
-0.033050537109375,
-0.043792724609375,
-0... |
jncraton/fastchat-t5-3b-v1.0-ct2-int8 | 2023-07-03T02:24:58.000Z | [
"transformers",
"license:apache-2.0",
"has_space",
"region:us"
] | null | jncraton | null | null | jncraton/fastchat-t5-3b-v1.0-ct2-int8 | 2 | 2 | transformers | 2023-07-03T01:59:59 | ---
license: apache-2.0
inference: false
---
# FastChat-T5 Model Card
## Model details
**Model type:**
FastChat-T5 is an open-source chatbot trained by fine-tuning Flan-t5-xl (3B parameters) on user-shared conversations collected from ShareGPT.
It is based on an encoder-decoder transformer architecture, and can autoregressively generate responses to users' inputs.
**Model date:**
FastChat-T5 was trained in April 2023.
**Organizations developing the model:**
The FastChat developers, primarily Dacheng Li, Lianmin Zheng and Hao Zhang.
**Paper or resources for more information:**
https://github.com/lm-sys/FastChat#FastChat-T5
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
FastChat-T5 is intended primarily for commercial use of large language models and chatbots. It can also be used for research purposes.
**Primary intended users:**
The primary intended users of the model are entrepreneurs and researchers in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Training details
The ShareGPT data is processed in a question-answering format: each ChatGPT response is treated as an answer, and the preceding conversation between the user and ChatGPT is treated as the question.
The encoder bi-directionally encodes a question into a hidden representation. The decoder uses cross-attention to attend to this representation while generating an answer uni-directionally from a start token.
This model is fine-tuned for 3 epochs, with a max learning rate of 2e-5, a warmup ratio of 0.03, and a cosine learning rate schedule.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
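## Example usage
The repository name indicates this is a CTranslate2 int8 conversion. A minimal inference sketch, assuming the converted model files sit in a local clone of this repo and reusing the upstream tokenizer; the prompt below is plain text without FastChat's conversation template.
```python
import ctranslate2
import transformers

# Path to a local clone of this repository (assumed to hold the CTranslate2 files)
translator = ctranslate2.Translator("fastchat-t5-3b-v1.0-ct2-int8", compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0")

prompt = "What is the capital of France?"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = translator.translate_batch([input_tokens], max_decoding_length=128)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```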
| 1,984 | [
[
-0.0307159423828125,
-0.059112548828125,
0.0145416259765625,
0.01515960693359375,
-0.03350830078125,
0.0138702392578125,
0.00140380859375,
-0.0430908203125,
0.00266265869140625,
0.0194244384765625,
-0.046356201171875,
-0.0487060546875,
-0.036224365234375,
-0... |
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.1 | 2023-07-03T03:29:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | chriskim2273 | null | null | chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.1 | 0 | 2 | transformers | 2023-07-03T03:26:47 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 37 | 0.7508 |
| No log | 2.0 | 74 | 0.4030 |
| No log | 3.0 | 111 | 0.3860 |
| No log | 4.0 | 148 | 0.4186 |
| No log | 5.0 | 185 | 0.4259 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
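## Example usage
A minimal sketch with the `transformers` question-answering pipeline. The question phrasing used during training is not documented, so the wording below is a guess, and the context is illustrative.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.1",
)
result = qa(
    question="What is the name of the company?",  # assumed question format
    context="Acme Robotics, a Boston-based IoT startup, raised $12M in Series A funding.",
)
print(result["answer"])
```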
| 1,525 | [
[
-0.0196685791015625,
-0.036956787109375,
0.009307861328125,
0.01114654541015625,
-0.03497314453125,
-0.0166778564453125,
0.002529144287109375,
-0.0216064453125,
-0.0002505779266357422,
0.017425537109375,
-0.059783935546875,
-0.04168701171875,
-0.043304443359375,... |
veluchs/whisper-small-dv | 2023-07-03T06:48:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | veluchs | null | null | veluchs/whisper-small-dv | 0 | 2 | transformers | 2023-07-03T05:21:24 | ---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: 'Whisper Small - Dhivehi '
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.509754146816427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small - Dhivehi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Wer Ortho: 62.8665
- Wer: 13.5098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1243 | 1.63 | 500 | 0.1709 | 62.8665 | 13.5098 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
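## Example usage
A minimal sketch with the `transformers` pipeline; the audio path is a placeholder for a Dhivehi clip.
```python
from transformers import pipeline

# "sample_dv.wav" is a placeholder path to a Dhivehi audio file
asr = pipeline("automatic-speech-recognition", model="veluchs/whisper-small-dv")
print(asr("sample_dv.wav")["text"])
```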
| 1,826 | [
[
-0.0287628173828125,
-0.034332275390625,
0.0112457275390625,
0.00439453125,
-0.0308685302734375,
-0.04180908203125,
-0.023773193359375,
-0.0241241455078125,
0.00777435302734375,
0.01500701904296875,
-0.061676025390625,
-0.0333251953125,
-0.04168701171875,
-0... |
devasheeshG/whisper_large_v2_fp16_transformers | 2023-07-11T21:18:21.000Z | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"speech",
"wav2vec2",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"... | automatic-speech-recognition | devasheeshG | null | null | devasheeshG/whisper_large_v2_fp16_transformers | 1 | 2 | transformers | 2023-07-03T08:21:43 | ---
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- pytorch
- audio
- speech
- automatic-speech-recognition
- whisper
- wav2vec2
model-index:
- name: whisper_large_v2_fp16_transformers
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
type: librispeech_asr
name: LibriSpeech (clean)
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 0
name: Test WER
description: Word Error Rate
- type: mer
value: 0
name: Test MER
description: Match Error Rate
- type: wil
value: 0
name: Test WIL
description: Word Information Lost
- type: wip
value: 0
name: Test WIP
description: Word Information Preserved
- type: cer
value: 0
name: Test CER
description: Character Error Rate
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
type: librispeech_asr
name: LibriSpeech (other)
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 0
name: Test WER
description: Word Error Rate
- type: mer
value: 0
name: Test MER
description: Match Error Rate
- type: wil
value: 0
name: Test WIL
description: Word Information Lost
- type: wip
value: 0
name: Test WIP
description: Word Information Preserved
- type: cer
value: 0
name: Test CER
description: Character Error Rate
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
type: mozilla-foundation/common_voice_14_0
name: Common Voice (14.0) (Hindi)
config: hi
split: test
args:
language: hi
metrics:
- type: wer
value: 44.64
name: Test WER
description: Word Error Rate
- type: mer
value: 41.69
name: Test MER
description: Match Error Rate
- type: wil
value: 59.53
name: Test WIL
description: Word Information Lost
- type: wip
value: 40.46
name: Test WIP
description: Word Information Preserved
- type: cer
value: 16.80
name: Test CER
description: Character Error Rate
widget:
- example_title: Hinglish Sample
src: https://huggingface.co/devasheeshG/whisper_large_v2_fp16_transformers/resolve/main/test.wav
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- "no"
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
---
## Versions:
- CUDA: 12.1
- cuDNN Version: 8.9.2.26_1.0-1_amd64
- tensorflow Version: 2.12.0
- torch Version: 2.1.0.dev20230606+cu12135
- transformers Version: 4.30.2
- accelerate Version: 0.20.3
## Model Benchmarks:
- RAM: 3 GB (Original_Model: 6GB)
- VRAM: 3.7 GB (Original_Model: 11GB)
- test.wav: 23 s (Multilingual Speech i.e. English+Hindi)
- **Time in seconds for processing by each device**
| Device Name | float32 (Original) | float16 | CudaCores | TensorCores |
| ----------------- | ------------------ | ------- | --------- | ----------- |
| 3060 | 2.2 | 1.3 | 3,584 | 112 |
| 1660 Super | OOM | 6 | 1,408 | N/A |
| Colab (Tesla T4) | - | - | 2,560 | 320 |
| Colab (CPU) | - | N/A | N/A | N/A |
| M1 (CPU) | - | - | N/A | N/A |
| M1 (GPU -> 'mps') | - | - | N/A | N/A |
- **NOTE: TensorCores are efficient in mixed-precision calculations**
- **CPU -> torch.float16 not supported on CPU (AMD Ryzen 5 3600 or Colab CPU)**
- Punctuation: sometimes missing from the output (the exact reason is unknown)
## Model Error Benchmarks:
- **WER: Word Error Rate**
- **MER: Match Error Rate**
- **WIL: Word Information Lost**
- **WIP: Word Information Preserved**
- **CER: Character Error Rate**
### Hindi to Hindi (test.tsv) [Common Voice 14.0](https://commonvoice.mozilla.org/en/datasets)
**Test done on RTX 3060 on 1000 Samples**
| | WER | MER | WIL | WIP | CER |
| ----------------------- | ----- | ----- | ----- | ----- | ----- |
| Original_Model (30 min) | 43.99 | 41.65 | 59.47 | 40.52 | 16.23 |
| This_Model (20 min) | 44.64 | 41.69 | 59.53 | 40.46 | 16.80 |
### Hindi to English (test.csv) [Custom Dataset](https://huggingface.co/datasets/devasheeshG/common_voices_14_0_hi2en_hi2hi)
**Test done on RTX 3060 on 1000 Samples**
| | WER | MER | WIL | WIP | CER |
| ----------------------- | --- | --- | --- | --- | --- |
| Original_Model (30 min) | - | - | - | - | - |
| This_Model (20 min) | - | - | - | - | - |
### English ([LibriSpeech](https://huggingface.co/datasets/librispeech_asr) -> test-clean)
**Test done on RTX 3060 on \_\_\_ Samples**
| | WER | MER | WIL | WIP | CER |
| -------------- | --- | --- | --- | --- | --- |
| Original_Model | - | - | - | - | - |
| This_Model | - | - | - | - | - |
### English ([LibriSpeech](https://huggingface.co/datasets/librispeech_asr) -> test-other)
**Test done on RTX 3060 on \_\_\_ Samples**
| | WER | MER | WIL | WIP | CER |
| -------------- | --- | --- | --- | --- | --- |
| Original_Model | - | - | - | - | - |
| This_Model | - | - | - | - | - |
- **'jiwer' library is used for calculations**
## Code for conversion:
- ### [Will be soon Uploaded on Github](https://github.com/devasheeshG)
## Usage
This repo contains an `__init__.py` file with all the code needed to use this model.
First, clone this repo and place all the files inside a folder.
### Make sure you have git-lfs installed (https://git-lfs.com)
```bash
git lfs install
git clone https://huggingface.co/devasheeshG/whisper_large_v2_fp16_transformers
```
**Please try this in a Jupyter notebook**
```python
# Import the Model
from whisper_large_v2_fp16_transformers import Model, load_audio, pad_or_trim
```
```python
# Initialise the model
model = Model(
model_name_or_path='whisper_large_v2_fp16_transformers',
cuda_visible_device="0",
device='cuda',
)
```
```python
# Load Audio
audio = load_audio('whisper_large_v2_fp16_transformers/test.wav')
audio = pad_or_trim(audio)
```
```python
# Transcribe (First transcription takes time)
model.transcribe(audio)
```
## Credits
It is the fp16 version of `openai/whisper-large-v2`.
| 8,046 | [
[
-0.029937744140625,
-0.06549072265625,
0.0165863037109375,
0.0204925537109375,
-0.0168304443359375,
-0.01031494140625,
-0.025787353515625,
-0.0311431884765625,
0.003307342529296875,
0.01317596435546875,
-0.047119140625,
-0.03509521484375,
-0.042144775390625,
... |
zijun/autotrain-input_list-71788138727 | 2023-07-03T11:19:37.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:zijun/autotrain-data-input_list",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | zijun | null | null | zijun/autotrain-input_list-71788138727 | 0 | 2 | transformers | 2023-07-03T11:19:08 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- zijun/autotrain-data-input_list
co2_eq_emissions:
emissions: 0.20160817247860105
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 71788138727
- CO2 Emissions (in grams): 0.2016
## Validation Metrics
- Loss: 0.261
- Accuracy: 0.882
- Precision: 0.926
- Recall: 0.926
- AUC: 0.931
- F1: 0.926
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zijun/autotrain-input_list-71788138727
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("zijun/autotrain-input_list-71788138727", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("zijun/autotrain-input_list-71788138727", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,126 | [
[
-0.033782958984375,
-0.0247344970703125,
0.01001739501953125,
0.007476806640625,
-0.0063934326171875,
-0.005046844482421875,
0.003841400146484375,
-0.017578125,
-0.0021514892578125,
0.008941650390625,
-0.055389404296875,
-0.036834716796875,
-0.06134033203125,
... |
hopkins/mbart-finetuned-eng-deu-longest | 2023-07-03T12:25:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | hopkins | null | null | hopkins/mbart-finetuned-eng-deu-longest | 0 | 2 | transformers | 2023-07-03T12:06:22 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6322
- Bleu: 20.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,240 | [
[
-0.049102783203125,
-0.053436279296875,
0.024017333984375,
0.01666259765625,
-0.0345458984375,
-0.031585693359375,
-0.0300445556640625,
-0.01506805419921875,
0.00788116455078125,
0.021270751953125,
-0.053558349609375,
-0.039154052734375,
-0.047607421875,
0.0... |
dfurman/mpt-7b-dolphin | 2023-07-27T21:29:27.000Z | [
"transformers",
"pytorch",
"safetensors",
"mpt",
"text-generation",
"MosaicML",
"custom_code",
"dataset:ehartford/dolphin",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | dfurman | null | null | dfurman/mpt-7b-dolphin | 12 | 2 | transformers | 2023-07-03T13:42:36 | ---
license: apache-2.0
datasets:
- ehartford/dolphin
tags:
- MosaicML
inference: false
---
# mpt-7b-dolphin 🐬
This instruction model was built via full parameter finetuning of [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) on the first 100k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single H100 (80 GB PCIe) for roughly 12 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
* Model license: Apache 2.0
* Basic usage: [notebook](assets/basic_inference_mpt-7b-dolphin.ipynb)
* Finetuning code: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/mpt/dolphin/sft-mpt-7b-dolphin.py)
* Loss curves: [plot](https://huggingface.co/dfurman/mpt-7b-dolphin#finetuning-description)
* Runtime stats: [table](https://huggingface.co/dfurman/mpt-7b-dolphin#runtime-tests)
### Example prompts and responses
Example 1:
**User**:
>You are a helpful assistant. Write me a numbered list of things to do in New York City.\n
**mpt-7b-dolphin**:
> 1. Visit the Statue of Liberty: The Statue of Liberty is a symbol of freedom and democracy, and it is a must-see attraction in New York City.
> 2. Take a walk in Central Park: Central Park is a massive green space in the heart of Manhattan, and it is a great place to relax, enjoy nature, and people-watch.
> 3. Visit the Metropolitan Museum of Art: The Met is one of the world's largest and most renowned art museums, housing an extensive collection of art from around the world.
> 4. Take a stroll through Times Square: Times Square is the heart of the city, and it is a must-see for its iconic billboards, bright lights, and bustling energy.
> 5. Visit the Empire State Building: The Empire State Building is a famous skyscraper in Midtown Manhattan, offering panoramic views of the city from its observation deck.
> 6. Explore the Museum of Modern Art (MoMA): MoMA is a renowned art museum in Midtown Manhattan, showcasing modern and contemporary art from around the world.
> 7. Visit the 9/11 Memorial and Museum: The 9/11 Memorial and Museum is a somber but important attraction in Lower Manhattan, commemorating the victims of the September 11th attacks.
> 8. Take a ferry ride to the Statue of Liberty: A ferry ride to the Statue of Liberty is a great way to see the city from the water
<br>
Example 2:
**User**:
>You are a helpful assistant. Daniel is in need of a haircut. His barber works Mondays, Wednesdays, and Fridays. So, Daniel went in for a haircut on Sunday. Does this make logical sense? Let's work this out in a step by step fashion to make sure we have the right answer.\n
**mpt-7b-dolphin**:
> Step 1: Identify the days of the week when the barber works.
>
> The barber works on Mondays, Wednesdays, and Fridays.
>
> Step 2: Determine the day of the week on which Daniel went for a haircut.
>
> Daniel went for a haircut on Sunday.
>
> Step 3: Compare the day of the week on which Daniel went for a haircut to the days when the barber works.
>
> Sunday is not one of the days when the barber works.
>
> Step 4: Determine if the statement makes logical sense.
>
> The statement does not make logical sense because Daniel went for a haircut on a day when the barber does not work.
>
> Answer: No, the statement does not make logical sense. Daniel should have gone for a haircut on a day when the barber works, such as Monday, Wednesday, or Friday.
<br>
Example 3:
**User**:
>You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n
**mpt-7b-dolphin**:
> Subject: Invitation to Dinner Party on Friday
>
> Dear Friend,
>
> I hope this email finds you well. I am writing to invite you to a dinner party that I am hosting on Friday. The party will be a casual gathering, and I would love to have you join us.
>
> The dinner will be held at my home, and the time is from 7:00 pm to 10:00 pm. The address is 123 Main Street, San Francisco, CA 94122.
>
> The dinner will be a potluck, so please bring a dish to share. I will provide the main course, so you can focus on bringing a side dish, dessert, or appetizer.
>
> Please RSVP by replying to this email or by calling me at 555-555-5555. I look forward to seeing you on Friday!
>
> Yours,
>
> Your Friendly Assistant
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.65B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Finetuning Description
This model was trained on a single H100 (80 GB PCIe) for about 12 hours using the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
Run: July 5, 2023
* args: {'lr': 2e-5, 'num_epochs': 1, 'seed': 43}
* logs: {'train_runtime': 61098.1062, 'train_samples_per_second': 1.637, 'train_steps_per_second': 0.409, 'train_loss': 1.4058428125, 'epoch': 1.0}

The above loss curve was generated from the tfevents log at [link](https://github.com/daniel-furman/sft-demos/tree/main/runs/jul_5_23_3_15_00_sft-mpt-7b-dolphin).
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## How to Use
Basic Usage: [Notebook](assets/basic_inference_mpt-7b-dolphin.ipynb)
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
First, install the package dependencies:
```python
!pip install -q -U transformers einops accelerate torch
!pip install -q -U triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir#subdirectory=python
```
Basic model loading:
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'dfurman/mpt-7b-dolphin',
trust_remote_code=True,
device_map="auto",
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'dfurman/mpt-7b-dolphin'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'meta'
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. Note that larger context windows require more available VRAM. For example:
```python
import transformers
name = 'dfurman/mpt-7b-dolphin'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True,
device_map="auto",
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. It can be invoked directly from this model's repo:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('dfurman/mpt-7b-dolphin')
```
Once loaded, the model and tokenizer can be used with the following code:
```python
import transformers
import torch
# text generation function
def mpt_generate(
model: transformers.AutoModelForCausalLM,
tokenizer: transformers.AutoTokenizer,
prompt: str,
max_new_tokens: int = 128,
temperature: float = 0.92,
) -> str:
"""
Initialize the pipeline
Uses Hugging Face GenerationConfig defaults
https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#transformers.GenerationConfig
Args:
model (transformers.AutoModelForCausalLM): Model for text generation
tokenizer (transformers.AutoTokenizer): Tokenizer for model
prompt (str): Prompt for text generation
max_new_tokens (int, optional): Max new tokens after the prompt to generate.
Defaults to 128.
temperature (float, optional): The value used to modulate the next token probabilities.
            Defaults to 0.92.
"""
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inputs = tokenizer(
prompt,
return_tensors="pt",
return_token_type_ids=False,
).to(device)
# when running Torch modules in lower precision, it is best practice to use the torch.autocast context manager.
with torch.autocast("cuda", dtype=torch.bfloat16):
response = model.generate(
**inputs,
max_new_tokens=max_new_tokens,
temperature=temperature,
return_dict_in_generate=True,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
decoded_output = tokenizer.decode(
response["sequences"][0],
skip_special_tokens=True,
) # grab output in natural language
return decoded_output[len(prompt) :] # remove prompt from output
```
We can now generate text! For example:
```python
prompt = "You are a helpful assistant. Here is a recipe for vegan banana bread:\n"
response = mpt_generate(
model,
tokenizer,
prompt,
max_new_tokens=150,
temperature=0.92,
)
print(response)
```
### Runtime tests
| runtime / 50 tokens (sec) | GPU | attn | torch dtype | VRAM (GB) |
|:-----------------------------:|:----------------------:|:---------------------:|:-------------:|:-----------------------:|
| 0.61 | 1x H100 (80 GB PCIe) | triton | bfloat16 | 12 |
| 0.67 | 1x H100 (80 GB PCIe) | torch | bfloat16 | 12 |
| 1.17 | 1x A100 (40 GB SXM) | triton | bfloat16 | 13 |
| 1.36 | 1x A100 (40 GB SXM) | torch | bfloat16 | 13 |
| 2.25 | 1x V100 (16 GB SXM) | torch | float16 | 13 |
| 3.75 | 1x V100 (16 GB SXM) | torch | fp4 | 4 |
| 4.84 | 1x Tesla T4 (15 GB) | torch | float16 | 13 |
| 8.77 | 1x Tesla T4 (15 GB) | torch | fp4 | 4 |
The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/inf_tests/runtimes_mpt_7b_dolphin.ipynb).
## Acknowledgements
This model was finetuned by Daniel Furman on July 5, 2023 and is intended primarily for research purposes.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Citation for MPT-7B
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-07-02},
urldate = {2023-07-02}
}
``` | 12,964 | [
[
-0.050994873046875,
-0.0504150390625,
0.04010009765625,
0.0136260986328125,
-0.0267486572265625,
-0.00731658935546875,
0.00658416748046875,
-0.03521728515625,
0.0230560302734375,
0.0290985107421875,
-0.05072021484375,
-0.03265380859375,
-0.0338134765625,
0.0... |
EgilKarlsen/GPT2_CSIC-Anomaly_Baseline | 2023-09-11T03:13:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | EgilKarlsen | null | null | EgilKarlsen/GPT2_CSIC-Anomaly_Baseline | 0 | 2 | transformers | 2023-07-03T16:19:36 | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GPT2_CSIC-Anomaly_Baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2_CSIC-Anomaly_Baseline
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6128
- Accuracy: 0.6647
## Model description
More information needed
## Intended uses & limitations
More information needed
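A minimal inference sketch, assuming the checkpoint loads as a standard GPT-2 sequence-classification model (the example request line is made up, in the spirit of the CSIC web-traffic data):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "EgilKarlsen/GPT2_CSIC-Anomaly_Baseline"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# A hypothetical HTTP request line to classify as normal vs. anomalous
inputs = tokenizer("GET /tienda1/publico/anadir.jsp?id=2", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```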
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7143 | 1.0 | 437 | 0.7182 | 0.4122 |
| 0.6603 | 2.0 | 875 | 0.6668 | 0.6196 |
| 0.6439 | 3.0 | 1312 | 0.6421 | 0.6738 |
| 0.6327 | 4.0 | 1750 | 0.6301 | 0.6722 |
| 0.6279 | 5.0 | 2187 | 0.6215 | 0.6672 |
| 0.6231 | 6.0 | 2625 | 0.6159 | 0.6658 |
| 0.6203 | 7.0 | 3062 | 0.6137 | 0.6652 |
| 0.6166 | 8.0 | 3500 | 0.6105 | 0.6634 |
| 0.6183 | 9.0 | 3937 | 0.6102 | 0.6642 |
| 0.615 | 9.99 | 4370 | 0.6096 | 0.6634 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| 1,950 | [
[
-0.0330810546875,
-0.040374755859375,
0.0158233642578125,
0.00543212890625,
-0.00948333740234375,
-0.0171051025390625,
0.006427764892578125,
-0.0166473388671875,
0.005008697509765625,
0.0198822021484375,
-0.054595947265625,
-0.04827880859375,
-0.0595703125,
... |
TimKond/diffusion-detection | 2023-07-03T17:31:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagenet-1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | TimKond | null | null | TimKond/diffusion-detection | 0 | 2 | transformers | 2023-07-03T16:50:48 | ---
tags:
- generated_from_trainer
model-index:
- name: diffusion-detection
results: []
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diffusion-detection
This model was trained to distinguish real-world images (negative) from machine-generated ones (positive).
## Model usage
```python
from transformers import BeitImageProcessor, BeitForImageClassification
from PIL import Image
processor = BeitImageProcessor.from_pretrained('TimKond/diffusion-detection')
model = BeitForImageClassification.from_pretrained('TimKond/diffusion-detection')
image = Image.open("2980_saltshaker.jpg")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
## Training and evaluation data
[BEiT-base-patch16-224-pt22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k) was loaded as the base model for further fine-tuning.
As negatives, a subsample of 10,000 images from [imagenet-1k](https://huggingface.co/datasets/imagenet-1k) was used. A complementary set of 10,000 positive images was generated using [Realistic_Vision_V1.4](https://huggingface.co/SG161222/Realistic_Vision_V1.4).
The labels from imagenet-1k were used as prompts for image generation. [GitHub reference](https://github.com/TimKond/diffusion-detection/blob/main/data/DatasetGeneration.py)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.2
- Pytorch 1.11.0+cu113
- Datasets 2.12.0
- Tokenizers 0.13.3 | 2,088 | [
[
-0.051422119140625,
-0.05712890625,
0.01800537109375,
0.0038299560546875,
-0.03314208984375,
-0.03338623046875,
0.006256103515625,
-0.03369140625,
0.00251007080078125,
0.01434326171875,
-0.0306854248046875,
-0.04473876953125,
-0.061248779296875,
-0.007617950... |
uygarkurt/convbert-restore-punctuation-turkish | 2023-10-31T13:23:05.000Z | [
"transformers",
"pytorch",
"convbert",
"token-classification",
"punctuation restoration",
"punctuation prediction",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | uygarkurt | null | null | uygarkurt/convbert-restore-punctuation-turkish | 0 | 2 | transformers | 2023-07-03T19:10:13 | ---
license: mit
language:
- tr
tags:
- punctuation restoration
- punctuation prediction
widget:
- text: "Türkiye toprakları üzerindeki ilk yerleşmeler Yontma Taş Devri'nde başlar Doğu Trakya'da Traklar olmak üzere Hititler Frigler Lidyalılar ve Dor istilası sonucu Yunanistan'dan kaçan Akalar tarafından kurulan İyon medeniyeti gibi çeşitli eski Anadolu medeniyetlerinin ardından Makedonya kralı Büyük İskender'in egemenliğiyle ve fetihleriyle birlikte Helenistik Dönem başladı"
---
# Transformer Based Punctuation Restoration Models for Turkish
<div float="center">
<a href="https://github.com/uygarkurt/Turkish-Punctuation-Restoration">
<img alt="open-source-image"
src="https://img.shields.io/badge/GitHub-repository-green?logo=GitHub">
</a>
</div>
<div align="center">
<p>Liked our work? Give us a ⭐ on GitHub!</p>
</div>
You can find the BERT model used in the paper [Transformer Based Punctuation Restoration for Turkish](https://ieeexplore.ieee.org/document/10286690). The aim of this work is to correctly place pre-decided punctuation marks in a given text. We present three pre-trained transformer models that predict **period (.)**, **comma (,)**, and **question (?)** marks for the Turkish language.
## Usage <a class="anchor" id="usage"></a>
### Inference <a class="anchor" id="inference"></a>
Recommended usage is via Hugging Face. You can run inference with this pre-trained model using the following code:
```python
from transformers import pipeline
pipe = pipeline(task="token-classification", model="uygarkurt/convbert-restore-punctuation-turkish")
sample_text = "Türkiye toprakları üzerindeki ilk yerleşmeler Yontma Taş Devri'nde başlar Doğu Trakya'da Traklar olmak üzere Hititler Frigler Lidyalılar ve Dor istilası sonucu Yunanistan'dan kaçan Akalar tarafından kurulan İyon medeniyeti gibi çeşitli eski Anadolu medeniyetlerinin ardından Makedonya kralı Büyük İskender'in egemenliğiyle ve fetihleriyle birlikte Helenistik Dönem başladı"
out = pipe(sample_text)
```
To use a different pre-trained model you can just replace the `model` argument with one of the other [available models](#models) we provided.
## Data <a class="anchor" id="data"></a>
The dataset is provided in the `data/` directory as train, validation, and test splits.
It can be summarized as below:
| Split | Total | Period (.) | Comma (,) | Question (?) |
|:-----------:|:-------:|:----------:|:---------:|:------------:|
| Train | 1471806 | 124817 | 98194 | 9816 |
| Validation | 180326 | 15306 | 11980 | 1199 |
| Test | 182487 | 15524 | 12242 | 1255 |
## Available Models <a class="anchor" id="models"></a>
We experimented with BERT, ELECTRA and ConvBERT. Pre-trained models can be accessed via Huggingface.
BERT: https://huggingface.co/uygarkurt/bert-restore-punctuation-turkish \
ELECTRA: https://huggingface.co/uygarkurt/electra-restore-punctuation-turkish \
ConvBERT: https://huggingface.co/uygarkurt/convbert-restore-punctuation-turkish
## Results <a class="results" id="results"></a>
`Precision`, `Recall`, and `F1` scores for each model and punctuation mark are summarized below.
| Model | | PERIOD | | | COMMA | | | QUESTION | | | OVERALL | |
|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
|Score Type| P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 |
| BERT | 0.972602 | 0.947504 | 0.959952 | 0.576145 | 0.700010 | 0.632066 | 0.927642 | 0.911342 | 0.919420 | 0.825506 | 0.852952 | 0.837146 |
| ELECTRA | 0.972602 | 0.948689 | 0.960497 | 0.576800 | 0.710208 | 0.636590 | 0.920325 | 0.921074 | 0.920699 | 0.823242 | 0.859990 | 0.839262 |
| ConvBERT | 0.972731 | 0.946791 | 0.959585 | 0.576964 | 0.708124 | 0.635851 | 0.922764 | 0.913849 | 0.918285 | 0.824153 | 0.856254 | 0.837907 |
## Citation <a class="anchor" id="citation"></a>
```
@INPROCEEDINGS{10286690,
author={Kurt, Uygar and Çayır, Aykut},
booktitle={2023 8th International Conference on Computer Science and Engineering (UBMK)},
title={Transformer Based Punctuation Restoration for Turkish},
year={2023},
volume={},
number={},
pages={169-174},
doi={10.1109/UBMK59864.2023.10286690}
}
``` | 4,452 | [
[
-0.021881103515625,
-0.0699462890625,
0.0309906005859375,
0.017303466796875,
-0.023284912109375,
-0.0079345703125,
-0.021697998046875,
-0.016754150390625,
0.0114898681640625,
0.025634765625,
-0.04052734375,
-0.051513671875,
-0.03033447265625,
0.0347290039062... |
DunnBC22/bert-base-cased-finetuned-ner-NCBI_Disease | 2023-07-20T22:06:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"medical",
"science",
"en",
"dataset:ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | DunnBC22 | null | null | DunnBC22/bert-base-cased-finetuned-ner-NCBI_Disease | 2 | 2 | transformers | 2023-07-04T04:33:33 | ---
license: apache-2.0
tags:
- generated_from_trainer
- medical
- science
datasets:
- ncbi_disease
model-index:
- name: bert-base-cased-finetuned-ner-NCBI_Disease
results: []
language:
- en
metrics:
- seqeval
- f1
- recall
- accuracy
- precision
pipeline_tag: token-classification
---
# bert-base-cased-finetuned-ner-NCBI_Disease
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Disease:
- Precision: 0.8063891577928364
- Recall: 0.8677083333333333
- F1: 0.8359257400903161
- Number: 960
- Overall
- Precision: 0.8064
- Recall: 0.8677
- F1: 0.8359
- Accuracy: 0.9825
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Token%20Classification/Monolingual/NCBI_Disease/NER%20Project%20Using%20NCBI_Disease%20Dataset.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
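A minimal inference sketch (the example sentence is made up; `aggregation_strategy="simple"` merges subword pieces into whole entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DunnBC22/bert-base-cased-finetuned-ner-NCBI_Disease",
    aggregation_strategy="simple",
)
print(ner("The patient was diagnosed with type 2 diabetes mellitus."))
```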
## Training and evaluation data
Data Source: https://huggingface.co/datasets/ncbi_disease
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Disease Precision | Disease Recall | Disease F1 | Disease Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-----------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:--------:|:-----------------:|:--------------:|:----------:|:-------:|
| 0.0525 | 1.0 | 340 | 0.0617 | 0.7813 | 0.7854 | 0.7834 | 960 | 0.7813 | 0.7854 | 0.7834 | 0.9796 |
| 0.022 | 2.0 | 680 | 0.0551 | 0.7897 | 0.8646 | 0.8255 | 960 | 0.7897 | 0.8646 | 0.8255 | 0.9819 |
| 0.0154 | 3.0 | 1020 | 0.0614 | 0.8064 | 0.8677 | 0.8359 | 960 | 0.8064 | 0.8677 | 0.8359 | 0.9825 |
* All values in the above chart are rounded to the nearest ten-thousandth.
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 | 2,381 | [
[
-0.0411376953125,
-0.0477294921875,
0.0198211669921875,
0.00077056884765625,
-0.009063720703125,
-0.01195526123046875,
0.00011038780212402344,
-0.01485443115234375,
0.043731689453125,
0.033721923828125,
-0.0465087890625,
-0.0552978515625,
-0.046539306640625,
... |
BaoKien/xlnet-base-cased-finetuned-squad-v2 | 2023-07-04T11:33:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | BaoKien | null | null | BaoKien/xlnet-base-cased-finetuned-squad-v2 | 0 | 2 | transformers | 2023-07-04T07:18:15 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: xlnet-base-cased-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-squad-v2
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3111
## Model description
More information needed
## Intended uses & limitations
More information needed
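A minimal usage sketch with the question-answering pipeline (the passage and question are made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="BaoKien/xlnet-base-cased-finetuned-squad-v2")
result = qa(
    question="Where was the treaty signed?",
    context="The treaty was signed in Paris in 1898, ending the war.",
)
print(result["answer"], result["score"])
```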
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2719 | 1.0 | 8265 | 0.2361 |
| 0.172 | 2.0 | 16530 | 0.2484 |
| 0.1236 | 3.0 | 24795 | 0.3111 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,413 | [
[
-0.0291595458984375,
-0.035552978515625,
0.00836181640625,
0.01454925537109375,
-0.0243682861328125,
-0.0145263671875,
-0.004268646240234375,
-0.01099395751953125,
0.004913330078125,
0.033935546875,
-0.06878662109375,
-0.03863525390625,
-0.04132080078125,
-0... |
KPF/KPF-bert-cls2 | 2023-07-04T08:53:57.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | KPF | null | null | KPF/KPF-bert-cls2 | 0 | 2 | transformers | 2023-07-04T07:48:09 | # KPF-BERT-CLS2
- A fine-grained category prediction model used for the regional news section of the Inside menu in [BigKinds Lab](https://lab.bigkinds.or.kr/); it reports fine-grained category results excluding the regional categories.
- Usage instructions and code are available at [KPF-bigkinds github](https://github.com/KPF-bigkinds/BIGKINDS-LAB/tree/main/KPF-BERT-CLS).
## Model Introduction
### KPF-BERT-CLS
kpf-BERT-cls, a model that performs the CLS (classification) task, was designed and developed on the basis of the kpf-BERT model built by the Korea Press Foundation.
- The kpf-BERT used in this example is available at [kpfBERT](https://github.com/KPFBERT/kpfbert).
- In this example, the data is trained in three groups: major categories, fine-grained categories of all major categories except the regional one, and regional fine-grained categories.
The training data was built by pairing article text with category labels. The labels follow the classification scheme below, and training was run on three datasets: article text + major category (excluding regions), article text + fine-grained category (excluding regions), and article text + regional fine-grained category.

kpf-BERT-cls was developed by adding a classification layer on top of kpf-BERT. It takes an article as input, tokenizes it with the kpf-BERT tokenizer, and predicts which class the article belongs to.
The structure and tokenizer of the base BERT model are shown in the figures below.


Because of its input length limit, BERT can only accept up to 512 subwords. Given the nature of news articles, texts such as interviews are usually longer than 512 subwords. To handle this, this project applies a stride and processes the document chunks independently.

kpf-BERT-cls consists of a major-category prediction model, a fine-grained-category prediction model, and a regional fine-grained-category prediction model. The major-category and fine-grained-category prediction models output top-3 results.

| 1,618 | [
[
-0.044769287109375,
-0.0186004638671875,
0.03497314453125,
0.0284423828125,
-0.03656005859375,
0.0023860931396484375,
0.005096435546875,
-0.041168212890625,
0.023529052734375,
0.0134429931640625,
-0.0285797119140625,
-0.048858642578125,
-0.055572509765625,
-... |
robookwus/climate-change-belief | 2023-07-04T09:21:07.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | robookwus | null | null | robookwus/climate-change-belief | 1 | 2 | transformers | 2023-07-04T08:33:40 | ---
license: mit
widget:
- text: "human-made climate change is a hoax spread by liberal media!"
---
Classifies whether a person believes in human-made climate change or not; neutral and factual news are also label options.
ALBERT model trained with data from: https://www.kaggle.com/datasets/edqian/twitter-climate-change-sentiment-dataset
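A minimal inference sketch, reusing the widget text above:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="robookwus/climate-change-belief")
print(clf("human-made climate change is a hoax spread by liberal media!"))
```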
--- | 339 | [
[
-0.029052734375,
-0.02520751953125,
0.0245819091796875,
0.0164031982421875,
-0.006168365478515625,
0.0328369140625,
-0.00740814208984375,
-0.0123443603515625,
0.032379150390625,
0.023895263671875,
-0.042449951171875,
-0.058197021484375,
-0.027374267578125,
0... |
Anwaarma/EnhancedModel | 2023-07-04T10:32:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | Anwaarma | null | null | Anwaarma/EnhancedModel | 0 | 2 | transformers | 2023-07-04T10:26:40 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EnhancedModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EnhancedModel
This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0455
- Accuracy: 0.98
## Model description
More information needed
## Intended uses & limitations
More information needed
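A minimal inference sketch (the training data and label meanings are not documented in this card, so the output labels should be interpreted with care; the Arabic input is made up):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Anwaarma/EnhancedModel")
# Example Arabic input ("This product is great")
print(clf("هذا المنتج رائع"))
```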
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2872 | 0.05 | 50 | 0.1340 | 0.94 |
| 0.1292 | 0.11 | 100 | 0.1201 | 0.96 |
| 0.1141 | 0.16 | 150 | 0.0616 | 0.98 |
| 0.1242 | 0.21 | 200 | 0.0554 | 0.98 |
| 0.1148 | 0.27 | 250 | 0.1123 | 0.98 |
| 0.0798 | 0.32 | 300 | 0.0946 | 0.98 |
| 0.1207 | 0.37 | 350 | 0.1145 | 0.98 |
| 0.0655 | 0.43 | 400 | 0.0778 | 0.98 |
| 0.084 | 0.48 | 450 | 0.0472 | 0.99 |
| 0.0783 | 0.53 | 500 | 0.0603 | 0.97 |
| 0.1034 | 0.58 | 550 | 0.0806 | 0.98 |
| 0.0556 | 0.64 | 600 | 0.0598 | 0.99 |
| 0.0702 | 0.69 | 650 | 0.0638 | 0.99 |
| 0.0665 | 0.74 | 700 | 0.0971 | 0.98 |
| 0.0481 | 0.8 | 750 | 0.0956 | 0.98 |
| 0.0474 | 0.85 | 800 | 0.0444 | 0.99 |
| 0.0661 | 0.9 | 850 | 0.0550 | 0.99 |
| 0.0522 | 0.96 | 900 | 0.0397 | 0.99 |
| 0.0857 | 1.01 | 950 | 0.0484 | 0.99 |
| 0.027 | 1.06 | 1000 | 0.0655 | 0.99 |
| 0.0224 | 1.12 | 1050 | 0.0548 | 0.99 |
| 0.0177 | 1.17 | 1100 | 0.0573 | 0.99 |
| 0.0278 | 1.22 | 1150 | 0.0400 | 0.99 |
| 0.0376 | 1.28 | 1200 | 0.0499 | 0.99 |
| 0.0412 | 1.33 | 1250 | 0.0455 | 0.98 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 2,806 | [
[
-0.044189453125,
-0.04656982421875,
0.007793426513671875,
0.0033702850341796875,
-0.006389617919921875,
-0.0142974853515625,
0.0006499290466308594,
-0.0085906982421875,
0.0306549072265625,
0.0255584716796875,
-0.0511474609375,
-0.0504150390625,
-0.05221557617187... |
Anwaarma/autotrain-enhancedauto-72049138835 | 2023-07-04T10:47:14.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Anwaarma/autotrain-data-enhancedauto",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Anwaarma | null | null | Anwaarma/autotrain-enhancedauto-72049138835 | 0 | 2 | transformers | 2023-07-04T10:42:11 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- Anwaarma/autotrain-data-enhancedauto
co2_eq_emissions:
emissions: 3.3106524610859784
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 72049138835
- CO2 Emissions (in grams): 3.3107
## Validation Metrics
- Loss: 0.042
- Accuracy: 0.990
- Precision: 0.994
- Recall: 0.935
- AUC: 0.997
- F1: 0.964
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Anwaarma/autotrain-enhancedauto-72049138835
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anwaarma/autotrain-enhancedauto-72049138835", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anwaarma/autotrain-enhancedauto-72049138835", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,145 | [
[
-0.0279998779296875,
-0.032806396484375,
0.01441192626953125,
0.0105743408203125,
-0.0014514923095703125,
0.00023317337036132812,
0.00823974609375,
-0.01552581787109375,
-0.0007481575012207031,
0.01473236083984375,
-0.0555419921875,
-0.03289794921875,
-0.0630493... |
ceefax/distilhubert-finetuned-gtzan | 2023-07-04T16:34:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | audio-classification | ceefax | null | null | ceefax/distilhubert-finetuned-gtzan | 0 | 2 | transformers | 2023-07-04T14:58:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5778
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
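A minimal inference sketch (the audio file path is hypothetical; GTZAN covers ten music genres):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="ceefax/distilhubert-finetuned-gtzan")
print(classifier("song.wav", top_k=3))  # hypothetical local audio file
```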
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7683 | 1.0 | 113 | 1.8297 | 0.53 |
| 1.175 | 2.0 | 226 | 1.2060 | 0.67 |
| 0.9578 | 3.0 | 339 | 0.9063 | 0.72 |
| 0.5966 | 4.0 | 452 | 0.7675 | 0.76 |
| 0.461 | 5.0 | 565 | 0.6908 | 0.77 |
| 0.2916 | 6.0 | 678 | 0.5942 | 0.85 |
| 0.2538 | 7.0 | 791 | 0.6129 | 0.82 |
| 0.3156 | 8.0 | 904 | 0.5881 | 0.82 |
| 0.2019 | 9.0 | 1017 | 0.5949 | 0.81 |
| 0.1736 | 10.0 | 1130 | 0.5778 | 0.81 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,971 | [
[
-0.03424072265625,
-0.039306640625,
0.0118255615234375,
0.00553131103515625,
-0.024566650390625,
-0.0232696533203125,
-0.00820159912109375,
-0.01493072509765625,
0.01312255859375,
0.01230621337890625,
-0.054931640625,
-0.048553466796875,
-0.049957275390625,
... |
parkervg/destt5-schema-prediction | 2023-07-04T19:51:16.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text2sql",
"en",
"dataset:splash",
"arxiv:2305.19974",
"arxiv:2005.02539",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | parkervg | null | null | parkervg/destt5-schema-prediction | 0 | 2 | transformers | 2023-07-04T16:58:08 | ---
language:
- en
tags:
- text2sql
datasets:
- splash
widget:
- text: "Give the name, population, and head of state for the country that has the largest area. || select name, population, continent from country order by surfacearea desc limit 1 || | world_1 | city : id, name, countrycode, district, population | sqlite_sequence : name, seq | country : code, name, continent, region, surfacearea, indepyear, population, lifeexpectancy, gnp, gnpold, localname, governmentform, headofstate, capital, code2 | countrylanguage : countrycode, language, isofficial, percentage || swap continent with head of state because it is not required."
---
## parkervg/destt5-schema-prediction
Fine-tuned weights for the schema prediction model described in [Correcting Semantic Parses with Natural Language through Dynamic
Schema Encoding](https://arxiv.org/pdf/2305.19974.pdf), based on [t5-large](https://huggingface.co/t5-large).
### Training Data
The model has been fine-tuned on the 7,481 training examples in the [SPLASH interactive semantic parsing dataset](https://github.com/MSR-LIT/Splash).
### Training Objective
This model was initialized with [t5-large](https://huggingface.co/t5-large) and fine-tuned with the text-to-text generation objective.
As this model works in the interactive setting, we utilize the standard text2sql features such as `question` and `db_schema`, in addition to `feedback` and `incorrect_parse`.
```
[question] || [incorrect_parse] || [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... || [feedback]
```
The model then attempts to predict those schema items that appear in the final gold SQL query, prefaced by the `db_id`.
```
[db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ...
```
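A minimal sketch of running the model on an input serialized as above (the example reuses this card's widget input with the schema abbreviated; generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("parkervg/destt5-schema-prediction")
model = AutoModelForSeq2SeqLM.from_pretrained("parkervg/destt5-schema-prediction")

# question || incorrect parse || serialized schema || feedback
# (the schema serialization is abbreviated here; pass the full one in practice)
input_text = (
    "Give the name, population, and head of state for the country that has the "
    "largest area. || select name, population, continent from country order by "
    "surfacearea desc limit 1 || | world_1 | country : code, name, continent, "
    "region, surfacearea, headofstate || swap continent with head of state "
    "because it is not required."
)

inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```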
### Performance
This model achieves 88.98% F1 score in identifying schema items on the SPLASH test set.
When combined with the [destt5-text2sql model](https://huggingface.co/parkervg/destt5-text2sql), it achieves 53.43% correction accuracy (exact-match) on the SPLASH test set.
### References
1. [Correcting Semantic Parses with Natural Language through Dynamic
Schema Encoding](https://arxiv.org/pdf/2305.19974.pdf)
2. [DestT5 codebase](https://github.com/parkervg/destt5)
3. [Speak to your Parser: Interactive Text-to-SQL with Natural Language Feedback](https://arxiv.org/pdf/2005.02539v2.pdf)
### Citation
```bibtex
@inproceedings{glenn2023correcting,
author = {Parker Glenn and Parag Pravin Dakle and Preethi Raghavan},
title = "Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI",
publisher = "Association for Computational Linguistics",
year = "2023"
}
```
| 2,806 | [
[
-0.0020046234130859375,
-0.06732177734375,
0.0343017578125,
0.022430419921875,
-0.02471923828125,
0.0030345916748046875,
-0.016326904296875,
-0.036376953125,
0.0081329345703125,
0.051177978515625,
-0.0428466796875,
-0.063720703125,
-0.056976318359375,
0.0282... |
luhx/dqn-SpaceInvadersNoFrameskip-v4 | 2023-07-04T19:16:36.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | luhx | null | null | luhx/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-04T19:16:07 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga luhx -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga luhx -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga luhx
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,744 | [
[
-0.0426025390625,
-0.039764404296875,
0.0209808349609375,
0.024871826171875,
-0.0106353759765625,
-0.018585205078125,
0.00969696044921875,
-0.01342010498046875,
0.013702392578125,
0.0235443115234375,
-0.07122802734375,
-0.03546142578125,
-0.0243377685546875,
... |
Officialletai/SpaceInvadersNoFrameskip-v4 | 2023-07-04T19:58:10.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Officialletai | null | null | Officialletai/SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-04T19:56:05 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 660.50 +/- 209.91
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Officialletai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Officialletai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Officialletai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,772 | [
[
-0.04345703125,
-0.0390625,
0.0204315185546875,
0.0261993408203125,
-0.01369476318359375,
-0.018402099609375,
0.00986480712890625,
-0.0141143798828125,
0.01264190673828125,
0.0203857421875,
-0.07000732421875,
-0.033355712890625,
-0.0237579345703125,
-0.00214... |
Huggingfly/dqn-SpaceInvadersNoFrameskip-v4 | 2023-07-04T20:26:00.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Huggingfly | null | null | Huggingfly/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-04T20:25:25 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 566.50 +/- 172.35
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Huggingfly -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Huggingfly -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Huggingfly
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| 2,765 | [
[
-0.04296875,
-0.038330078125,
0.017333984375,
0.0268707275390625,
-0.0117034912109375,
-0.01499176025390625,
0.00992584228515625,
-0.0145416259765625,
0.01459503173828125,
0.01953125,
-0.07177734375,
-0.034881591796875,
-0.0252227783203125,
-0.00316619873046... |
GalSarid/setfit-movie-genre-sentence-t5-xl | 2023-07-07T20:04:50.000Z | [
"sentence-transformers",
"pytorch",
"t5",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-classification | GalSarid | null | null | GalSarid/setfit-movie-genre-sentence-t5-xl | 1 | 2 | sentence-transformers | 2023-07-04T21:34:54 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# GalSarid/setfit-movie-genre-sentence-t5-xl
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("GalSarid/setfit-movie-genre-sentence-t5-xl")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,573 | [
[
-0.007076263427734375,
-0.060699462890625,
0.025543212890625,
-0.0079345703125,
-0.0138092041015625,
-0.00971221923828125,
-0.0139312744140625,
0.0026340484619140625,
0.0005483627319335938,
0.033447265625,
-0.04925537109375,
-0.0201568603515625,
-0.0376586914062... |
gvij/gpt-j-code-alpaca-instruct | 2023-07-04T23:07:51.000Z | [
"peft",
"gpt-j",
"gpt-j-6b",
"code",
"instruct",
"instruct-code",
"code-alpaca",
"alpaca-instruct",
"alpaca",
"llama7b",
"gpt2",
"dataset:ewof/code-alpaca-instruct-unfiltered",
"region:us"
] | null | gvij | null | null | gvij/gpt-j-code-alpaca-instruct | 0 | 2 | peft | 2023-07-04T22:17:52 | ---
datasets:
- ewof/code-alpaca-instruct-unfiltered
library_name: peft
tags:
- gpt-j
- gpt-j-6b
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- llama7b
- gpt2
---
We finetuned GPT-J 6B on the Code-Alpaca-Instruct dataset (ewof/code-alpaca-instruct-unfiltered) for 5 epochs, or roughly 25,000 steps, using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
This dataset is HuggingFaceH4/CodeAlpaca_20K unfiltered, removing 36 instances of blatant alignment.
The finetuning session completed in 206 minutes and cost us only `$8` for the entire run!
#### Hyperparameters & Run details:
- Model Path: EleutherAI/gpt-j-6b
- Dataset: ewof/code-alpaca-instruct-unfiltered
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
Loss metrics:

---
license: apache-2.0
--- | 997 | [
[
-0.054779052734375,
-0.05810546875,
0.0278167724609375,
-0.0014295578002929688,
-0.0357666015625,
-0.034149169921875,
-0.030609130859375,
-0.03594970703125,
-0.01293182373046875,
0.039520263671875,
-0.048492431640625,
-0.0234375,
-0.030731201171875,
-0.00193... |
DunnBC22/dit-base-Document_Classification-Desafio_1 | 2023-07-22T20:39:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"en",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | DunnBC22 | null | null | DunnBC22/dit-base-Document_Classification-Desafio_1 | 1 | 2 | transformers | 2023-07-05T01:00:14 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: dit-base-Document_Classification-Desafio_1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: validation
split: train
args: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9865
language:
- en
---
# dit-base-Document_Classification-Desafio_1
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base).
It achieves the following results on the evaluation set:
- Loss: 0.0436
- Accuracy: 0.9865
- F1
- Weighted: 0.9865
- Micro: 0.9865
- Macro: 0.9863
- Recall
- Weighted: 0.9865
- Micro: 0.9865
- Macro: 0.9861
- Precision
- Weighted: 0.9869
- Micro: 0.9865
- Macro: 0.9870
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Document%20AI/Multiclass%20Classification/Document%20Classification%20-%20Desafio%201/Document%20Classification%20-%20Desafio%201.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
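A minimal inference sketch (the image path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="DunnBC22/dit-base-Document_Classification-Desafio_1",
)
print(classifier("scanned_page.png"))  # hypothetical document image
```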
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/rywgar/document-classification-desafio-1
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.8316 | 0.99 | 62 | 0.7519 | 0.743 | 0.7020 | 0.743 | 0.7015 | 0.743 | 0.743 | 0.7430 | 0.6827 | 0.743 | 0.6819 |
| 0.3561 | 2.0 | 125 | 0.2302 | 0.9395 | 0.9401 | 0.9395 | 0.9400 | 0.9395 | 0.9395 | 0.9394 | 0.9482 | 0.9395 | 0.9480 |
| 0.2222 | 2.99 | 187 | 0.1350 | 0.956 | 0.9564 | 0.956 | 0.9561 | 0.956 | 0.956 | 0.9551 | 0.9598 | 0.956 | 0.9600 |
| 0.1705 | 4.0 | 250 | 0.0873 | 0.9725 | 0.9727 | 0.9725 | 0.9725 | 0.9725 | 0.9725 | 0.9721 | 0.9740 | 0.9725 | 0.9740 |
| 0.1541 | 4.99 | 312 | 0.0642 | 0.9825 | 0.9825 | 0.9825 | 0.9824 | 0.9825 | 0.9825 | 0.9822 | 0.9830 | 0.9825 | 0.9830 |
| 0.1253 | 6.0 | 375 | 0.0330 | 0.9915 | 0.9915 | 0.9915 | 0.9914 | 0.9915 | 0.9915 | 0.9913 | 0.9916 | 0.9915 | 0.9916 |
| 0.1196 | 6.99 | 437 | 0.0524 | 0.982 | 0.9822 | 0.982 | 0.9820 | 0.982 | 0.982 | 0.9817 | 0.9832 | 0.982 | 0.9832 |
| 0.0896 | 7.94 | 496 | 0.0436 | 0.9865 | 0.9865 | 0.9865 | 0.9863 | 0.9865 | 0.9865 | 0.9861 | 0.9869 | 0.9865 | 0.9870 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 | 3,994 | [
[
-0.04559326171875,
-0.03558349609375,
0.0217437744140625,
0.00223541259765625,
0.0004677772521972656,
0.0090179443359375,
0.0021152496337890625,
-0.004459381103515625,
0.03704833984375,
0.01238250732421875,
-0.0478515625,
-0.051727294921875,
-0.0601806640625,
... |
abhinavkulkarni/mosaicml-mpt-7b-chat-w4-g128-awq | 2023-09-12T13:08:59.000Z | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"MosaicML",
"AWQ",
"custom_code",
"license:cc-by-sa-3.0",
"text-generation-inference",
"region:us"
] | text-generation | abhinavkulkarni | null | null | abhinavkulkarni/mosaicml-mpt-7b-chat-w4-g128-awq | 0 | 2 | transformers | 2023-07-05T07:20:39 | ---
license: cc-by-sa-3.0
tags:
- MosaicML
- AWQ
inference: false
---
# MPT-7B-Chat (4-bit 128g AWQ Quantized)
[MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) is a chatbot-like model for dialogue generation.
This model is a 4-bit 128 group size AWQ quantized model. For more information about AWQ quantization, please click [here](https://github.com/mit-han-lab/llm-awq).
## Model Date
July 5, 2023
## Model License
Please refer to original MPT model license ([link](https://huggingface.co/mosaicml/mpt-7b-chat)).
Please refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).
## CUDA Version
This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of `8.0` or higher.
For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image is runtime v12.1 but otherwise the same as the configuration above and has also been verified to work.
## How to Use
```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout f084f40bd996f3cf3a0633c1ad7d9d476c318aaa \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```
```python
import time
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download
model_name = "abhinavkulkarni/mosaicml-mpt-7b-chat-w4-g128-awq"
# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
# Tokenizer
try:
tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
# Model
w_bit = 4
q_config = {
"zero_point": True,
"q_group_size": 128,
}
load_quant = snapshot_download(model_name)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config=config,
torch_dtype=torch.float16, trust_remote_code=True)
real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()
model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")
# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
inputs=input_ids,
temperature=0.7,
max_new_tokens=512,
top_p=0.15,
top_k=0,
repetition_penalty=1.1,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer)
```
## Evaluation
This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).
[MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat)
| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|13.5936| | |
| | |byte_perplexity| 1.6291| | |
| | |bits_per_byte | 0.7040| | |
[MPT-7B-Chat (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/mosaicml-mpt-7b-chat-w4-g128-awq)
| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|14.0922| | |
| | |byte_perplexity| 1.6401| | |
| | |bits_per_byte | 0.7138| | |
## Acknowledgements
The MPT model was originally finetuned by Sam Havens and the MosaicML NLP team. Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
The model was quantized with AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:
```
@article{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
journal={arXiv},
year={2023}
}
```
| 4,562 | [
[
-0.0289764404296875,
-0.046600341796875,
0.0299224853515625,
0.0261077880859375,
-0.0312347412109375,
0.0021724700927734375,
0.00428009033203125,
-0.01119232177734375,
-0.01154327392578125,
0.0015764236450195312,
-0.039703369140625,
-0.0322265625,
-0.04858398437... |
remshu-inc/mmark | 2023-07-07T05:43:28.000Z | [
"keras",
"license:mit",
"region:us"
] | null | remshu-inc | null | null | remshu-inc/mmark | 0 | 2 | keras | 2023-07-05T11:06:24 | ---
license: mit
---
The model is designed to predict the grade of a student-written text in German.
The model is a fully connected neural network with 10 input neurons, 25 neurons in the first hidden layer, 11 neurons in the second hidden layer, 4 neurons in the third hidden layer, and 1 neuron in the output layer.
The inputs are the following values, normalized by the number of tokens in the text:
* number of grammatical errors in the text;
* number of lexical errors in the text;
* number of syntactic errors in the text;
* number of spelling errors in the text;
* number of discourse errors in the text;
* number of omitted words in the text;
* number of superfluous words in the text;
* number of errors of severity level 1;
* number of errors of severity level 2;
* number of errors of severity level 3.
The model outputs a grade for the text on a 12-point scale, where 1 is the minimum grade and 12 the maximum.
The [remshu-inc/pakt-work-tools](https://github.com/remshu-inc/pakt-work-tools) library is recommended for working with the model.
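A minimal sketch of calling the model from Python, assuming it accepts a single row of the ten normalized features in the order listed above (the feature values are made up):
```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("remshu-inc/mmark")

# Ten per-token-normalized error counts, in the order listed above
features = np.array([[0.02, 0.01, 0.0, 0.03, 0.0, 0.01, 0.0, 0.04, 0.02, 0.01]])
print(model.predict(features))  # predicted grade on the 12-point scale
```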
| 1,098 | [
[
-0.035552978515625,
-0.03289794921875,
0.01509857177734375,
0.0147857666015625,
-0.04071044921875,
0.025115966796875,
0.0163421630859375,
-0.0164031982421875,
0.04632568359375,
0.00707244873046875,
-0.041778564453125,
-0.038909912109375,
-0.03338623046875,
-... |
msladic/a2c-AntBulletEnv-v0 | 2023-07-18T09:57:24.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | msladic | null | null | msladic/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-05T12:50:36 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1346.23 +/- 116.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` convention used by huggingface_sb3):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained A2C agent
checkpoint = load_from_hub("msladic/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| 791 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
MysticShadow427/roberta-meld | 2023-07-05T17:21:14.000Z | [
"keras",
"region:us"
] | null | MysticShadow427 | null | null | MysticShadow427/roberta-meld | 0 | 2 | keras | 2023-07-05T17:19:52 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
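A minimal loading sketch (the expected input format is not documented in this card, so inspect the architecture before wiring up inputs):
```python
from huggingface_hub import from_pretrained_keras

# Load the Keras checkpoint from the Hub and print its architecture
model = from_pretrained_keras("MysticShadow427/roberta-meld")
model.summary()
```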
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-06 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 840 | [
[
-0.038421630859375,
-0.0396728515625,
0.031463623046875,
0.007801055908203125,
-0.044036865234375,
-0.0182342529296875,
0.0100555419921875,
-0.004604339599609375,
0.0193939208984375,
0.0300140380859375,
-0.043792724609375,
-0.05133056640625,
-0.0396728515625,
... |
aronmal/dqn-SpaceInvaders-v4 | 2023-07-05T18:01:03.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | aronmal | null | null | aronmal/dqn-SpaceInvaders-v4 | 0 | 2 | stable-baselines3 | 2023-07-05T18:00:21 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 551.00 +/- 161.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aronmal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aronmal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga aronmal
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
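Once `rl_zoo3.load_from_hub` has saved the checkpoint under `logs/`, it can also be loaded directly in Python; a minimal sketch, assuming the zoo's usual directory layout (the path is a placeholder):
```python
from stable_baselines3 import DQN

# hypothetical path; adjust to where load_from_hub saved the checkpoint
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
```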
| 2,756 | [
[
-0.043182373046875,
-0.039337158203125,
0.0185394287109375,
0.02471923828125,
-0.010986328125,
-0.017303466796875,
0.01070404052734375,
-0.012939453125,
0.012603759765625,
0.0225677490234375,
-0.0723876953125,
-0.03460693359375,
-0.024688720703125,
-0.004096... |
rsilg/dqn-SpaceInvadersNoFrameskip-v4 | 2023-07-05T19:40:58.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | rsilg | null | null | rsilg/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-07-05T19:40:29 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 541.50 +/- 118.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rsilg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rsilg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rsilg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
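Beyond the `enjoy` script, the downloaded policy can be evaluated programmatically; a minimal sketch, assuming the Atari extras are installed and that the checkpoint path below (a placeholder) matches where `load_from_hub` saved it:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)  # matches the frame_stack hyperparameter above
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")  # hypothetical path
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```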
| 2,750 | [
[
-0.043243408203125,
-0.039642333984375,
0.0200347900390625,
0.0250091552734375,
-0.01148223876953125,
-0.0178070068359375,
0.01023101806640625,
-0.012939453125,
0.012115478515625,
0.0222625732421875,
-0.07147216796875,
-0.0355224609375,
-0.024688720703125,
-... |
papahawk/gpt2-1.5b | 2023-07-06T11:19:11.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"pyTtorch",
"tensorflow",
"en",
"dataset:gpt-2-output-dataset",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | papahawk | null | null | papahawk/gpt2-1.5b | 0 | 2 | transformers | 2023-07-05T22:17:24 | ---
language:
- en
tags:
- text-generation
- pytorch
- tensorflow
- transformers
datasets:
- gpt-2-output-dataset
license: mit
---
<h1 style='text-align: center '>GPT2-1.5b LLM</h1>
<h2 style='text-align: center '><em>Fork of OpenAI/GPT2-1.5b</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
# gpt2-1.5b
Code and models from the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf).
You can read about GPT-2 and its staged release in our [original blog post](https://blog.openai.com/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/).
We have also [released a dataset](https://github.com/openai/gpt-2-output-dataset) for researchers to study their behaviors.
<sup>*</sup> *Note that our original parameter counts were wrong due to an error (in our previous blog posts and paper). Thus you may have seen small referred to as 117M and medium referred to as 345M.*
## Usage
This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2.
For basic information, see our [model card](./model_card.md).
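A minimal generation sketch, assuming this fork loads with the standard `transformers` GPT-2 classes:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("papahawk/gpt2-1.5b")
model = GPT2LMHeadModel.from_pretrained("papahawk/gpt2-1.5b")

inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```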
### Some caveats
- GPT-2 models' robustness and worst case behaviors are not well-understood. As with any machine-learned model, carefully evaluate GPT-2 for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important.
- The dataset our GPT-2 models were trained on contains many texts with [biases](https://twitter.com/TomerUllman/status/1101485289720242177) and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well.
- To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways, which takes more than a quick read for a human to notice.
### Work with us
Please [let us know](mailto:languagequestions@openai.com) if you’re doing interesting research with or working on applications of GPT-2! We’re especially interested in hearing from and potentially working with those who are studying
- Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
- The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
## Development
See [DEVELOPERS.md](./DEVELOPERS.md)
## Contributors
See [CONTRIBUTORS.md](./CONTRIBUTORS.md)
## Citation
Please use the following bibtex entry:
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
## Future work
We may release code for evaluating the models on various benchmarks.
We are still considering release of the larger models.
## License
[Modified MIT](./LICENSE) | 3,216 | [
[
-0.0120086669921875,
-0.058502197265625,
0.035614013671875,
0.0033740997314453125,
-0.0251007080078125,
-0.0271453857421875,
-0.020721435546875,
-0.048614501953125,
-0.0208282470703125,
0.0179595947265625,
-0.0255889892578125,
-0.01134490966796875,
-0.0464172363... |
nkpz/open_llama_7b_qlora_uncensored-gptq | 2023-07-06T08:47:29.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | nkpz | null | null | nkpz/open_llama_7b_qlora_uncensored-gptq | 0 | 2 | transformers | 2023-07-06T08:32:58 | ---
license: apache-2.0
---
4-bit quantized files for [georgesung/open_llama_7b_qlora_uncensored](https://huggingface.co/georgesung/open_llama_7b_qlora_uncensored)
Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
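After running the quantization command below, the resulting 4-bit checkpoint can be sanity-checked with the `safetensors` package; a minimal sketch (the filename is a placeholder):
```python
from safetensors.torch import load_file

# load_file returns a dict mapping tensor names to torch tensors
state_dict = load_file("file.safetensors")  # hypothetical output path
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```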
Command used to quantize: `python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --save_safetensors /my/output/file.safetensors` | 398 | [
[
-0.0008578300476074219,
-0.0305328369140625,
0.042999267578125,
0.0338134765625,
-0.05548095703125,
-0.003688812255859375,
0.0298309326171875,
-0.0095367431640625,
0.003963470458984375,
0.04241943359375,
-0.0177154541015625,
-0.05718994140625,
-0.032012939453125... |
lizhuang144/flan-t5-large-factual-sg | 2023-08-31T05:32:28.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | lizhuang144 | null | null | lizhuang144/flan-t5-large-factual-sg | 0 | 2 | transformers | 2023-07-06T09:54:24 | The tokenizer is the flan-t5-large tokenizer.
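A minimal scene-graph parsing sketch, assuming the checkpoint loads with the standard `transformers` seq2seq classes (the prompt prefix is an assumption; see the FACTUAL README linked below):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lizhuang144/flan-t5-large-factual-sg")
model = AutoModelForSeq2SeqLM.from_pretrained("lizhuang144/flan-t5-large-factual-sg")

text = "Generate Scene Graph: a man is riding a horse on the beach"  # prefix is an assumption
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```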
See 'https://github.com/zhuang-li/FACTUAL/blob/main/README.md' for details. | 113 | [
[
-0.01148223876953125,
-0.043670654296875,
0.0229034423828125,
0.024871826171875,
-0.0256195068359375,
0.01654052734375,
-0.0257110595703125,
-0.0252227783203125,
0.039031982421875,
0.0242156982421875,
-0.0238189697265625,
-0.041748046875,
-0.03485107421875,
... |
jordyvl/vit-base_tobacco | 2023-07-06T23:51:02.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | jordyvl | null | null | jordyvl/vit-base_tobacco | 0 | 2 | transformers | 2023-07-06T13:12:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
- Accuracy: 0.815
- Brier Loss: 0.3076
- Nll: 1.1877
- F1 Micro: 0.815
- F1 Macro: 0.7942
- Ece: 0.2072
- Aurc: 0.0734
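A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` image-classification Auto classes (the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("jordyvl/vit-base_tobacco")
model = AutoModelForImageClassification.from_pretrained("jordyvl/vit-base_tobacco")

image = Image.open("document.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```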
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 6 | 2.3082 | 0.085 | 0.9012 | 6.2672 | 0.085 | 0.0735 | 0.1625 | 0.9316 |
| No log | 1.96 | 12 | 2.2872 | 0.14 | 0.8970 | 4.8533 | 0.14 | 0.0885 | 0.1958 | 0.8912 |
| No log | 2.96 | 18 | 2.2562 | 0.225 | 0.8906 | 4.5559 | 0.225 | 0.1319 | 0.2527 | 0.8101 |
| No log | 3.96 | 24 | 2.2107 | 0.265 | 0.8808 | 4.3151 | 0.265 | 0.1614 | 0.2710 | 0.6990 |
| No log | 4.96 | 30 | 2.1433 | 0.3 | 0.8654 | 4.1825 | 0.3 | 0.1615 | 0.2943 | 0.6102 |
| No log | 5.96 | 36 | 2.0764 | 0.325 | 0.8493 | 3.6715 | 0.325 | 0.1696 | 0.3160 | 0.4502 |
| No log | 6.96 | 42 | 2.0012 | 0.375 | 0.8287 | 3.5534 | 0.375 | 0.1901 | 0.3542 | 0.3791 |
| No log | 7.96 | 48 | 1.9197 | 0.41 | 0.8041 | 3.3582 | 0.41 | 0.2136 | 0.3528 | 0.3342 |
| No log | 8.96 | 54 | 1.8379 | 0.45 | 0.7767 | 3.1997 | 0.45 | 0.2279 | 0.3709 | 0.2872 |
| No log | 9.96 | 60 | 1.7538 | 0.535 | 0.7475 | 2.9586 | 0.535 | 0.3755 | 0.4024 | 0.2508 |
| No log | 10.96 | 66 | 1.6634 | 0.57 | 0.7132 | 2.6969 | 0.57 | 0.4025 | 0.4182 | 0.2183 |
| No log | 11.96 | 72 | 1.5952 | 0.61 | 0.6842 | 2.4519 | 0.61 | 0.4427 | 0.4153 | 0.1882 |
| No log | 12.96 | 78 | 1.5205 | 0.655 | 0.6554 | 1.9703 | 0.655 | 0.5306 | 0.4572 | 0.1651 |
| No log | 13.96 | 84 | 1.4566 | 0.67 | 0.6308 | 1.7832 | 0.67 | 0.5458 | 0.4240 | 0.1514 |
| No log | 14.96 | 90 | 1.4009 | 0.685 | 0.6074 | 1.8217 | 0.685 | 0.5641 | 0.4221 | 0.1406 |
| No log | 15.96 | 96 | 1.3520 | 0.7 | 0.5866 | 1.6223 | 0.7 | 0.5896 | 0.4107 | 0.1304 |
| No log | 16.96 | 102 | 1.3220 | 0.7 | 0.5741 | 1.4452 | 0.7 | 0.5865 | 0.4029 | 0.1225 |
| No log | 17.96 | 108 | 1.2764 | 0.705 | 0.5522 | 1.4534 | 0.705 | 0.6076 | 0.3805 | 0.1269 |
| No log | 18.96 | 114 | 1.2448 | 0.72 | 0.5378 | 1.4843 | 0.72 | 0.6321 | 0.3724 | 0.1193 |
| No log | 19.96 | 120 | 1.2049 | 0.74 | 0.5210 | 1.2527 | 0.74 | 0.6471 | 0.3947 | 0.1039 |
| No log | 20.96 | 126 | 1.1712 | 0.74 | 0.5057 | 1.1657 | 0.74 | 0.6464 | 0.3833 | 0.0955 |
| No log | 21.96 | 132 | 1.1453 | 0.735 | 0.4936 | 1.0277 | 0.735 | 0.6597 | 0.3628 | 0.1015 |
| No log | 22.96 | 138 | 1.1094 | 0.745 | 0.4771 | 1.0003 | 0.745 | 0.6667 | 0.3841 | 0.0938 |
| No log | 23.96 | 144 | 1.0803 | 0.75 | 0.4628 | 1.0334 | 0.75 | 0.6972 | 0.3490 | 0.0891 |
| No log | 24.96 | 150 | 1.0658 | 0.755 | 0.4559 | 1.0092 | 0.755 | 0.6937 | 0.3536 | 0.0925 |
| No log | 25.96 | 156 | 1.0345 | 0.765 | 0.4423 | 0.9971 | 0.765 | 0.7356 | 0.3661 | 0.0852 |
| No log | 26.96 | 162 | 1.0133 | 0.76 | 0.4323 | 0.9206 | 0.76 | 0.7302 | 0.3343 | 0.0791 |
| No log | 27.96 | 168 | 0.9927 | 0.775 | 0.4225 | 0.9015 | 0.775 | 0.7433 | 0.3457 | 0.0794 |
| No log | 28.96 | 174 | 0.9789 | 0.765 | 0.4152 | 0.8946 | 0.765 | 0.7282 | 0.3337 | 0.0818 |
| No log | 29.96 | 180 | 0.9509 | 0.78 | 0.4025 | 0.9323 | 0.78 | 0.7565 | 0.3135 | 0.0733 |
| No log | 30.96 | 186 | 0.9388 | 0.79 | 0.3968 | 0.8616 | 0.79 | 0.7642 | 0.3353 | 0.0694 |
| No log | 31.96 | 192 | 0.9316 | 0.78 | 0.3927 | 0.8636 | 0.78 | 0.7588 | 0.3426 | 0.0739 |
| No log | 32.96 | 198 | 0.9197 | 0.79 | 0.3876 | 0.8581 | 0.79 | 0.7656 | 0.3042 | 0.0800 |
| No log | 33.96 | 204 | 0.9020 | 0.775 | 0.3792 | 0.8458 | 0.775 | 0.7543 | 0.2872 | 0.0744 |
| No log | 34.96 | 210 | 0.8833 | 0.785 | 0.3694 | 0.8288 | 0.785 | 0.7619 | 0.3305 | 0.0663 |
| No log | 35.96 | 216 | 0.8684 | 0.795 | 0.3624 | 0.8462 | 0.795 | 0.7779 | 0.3184 | 0.0690 |
| No log | 36.96 | 222 | 0.8608 | 0.79 | 0.3584 | 0.8860 | 0.79 | 0.7707 | 0.2790 | 0.0709 |
| No log | 37.96 | 228 | 0.8586 | 0.79 | 0.3587 | 0.8954 | 0.79 | 0.7724 | 0.3153 | 0.0754 |
| No log | 38.96 | 234 | 0.8470 | 0.79 | 0.3515 | 0.8822 | 0.79 | 0.7684 | 0.3075 | 0.0726 |
| No log | 39.96 | 240 | 0.8288 | 0.79 | 0.3434 | 0.8192 | 0.79 | 0.7700 | 0.2700 | 0.0648 |
| No log | 40.96 | 246 | 0.8255 | 0.8 | 0.3426 | 0.8191 | 0.8000 | 0.7808 | 0.2760 | 0.0727 |
| No log | 41.96 | 252 | 0.8247 | 0.8 | 0.3411 | 0.8876 | 0.8000 | 0.7737 | 0.2903 | 0.0701 |
| No log | 42.96 | 258 | 0.8196 | 0.8 | 0.3389 | 0.8841 | 0.8000 | 0.7786 | 0.2768 | 0.0727 |
| No log | 43.96 | 264 | 0.8118 | 0.805 | 0.3351 | 0.9510 | 0.805 | 0.7806 | 0.2620 | 0.0685 |
| No log | 44.96 | 270 | 0.8127 | 0.795 | 0.3352 | 1.0119 | 0.795 | 0.7705 | 0.2650 | 0.0707 |
| No log | 45.96 | 276 | 0.7968 | 0.8 | 0.3285 | 1.0041 | 0.8000 | 0.7788 | 0.2734 | 0.0665 |
| No log | 46.96 | 282 | 0.7946 | 0.81 | 0.3274 | 1.0647 | 0.81 | 0.7921 | 0.2765 | 0.0703 |
| No log | 47.96 | 288 | 0.7996 | 0.805 | 0.3298 | 1.0108 | 0.805 | 0.7867 | 0.2772 | 0.0714 |
| No log | 48.96 | 294 | 0.7971 | 0.805 | 0.3283 | 1.0728 | 0.805 | 0.7816 | 0.2756 | 0.0732 |
| No log | 49.96 | 300 | 0.7950 | 0.8 | 0.3278 | 1.0694 | 0.8000 | 0.7758 | 0.2540 | 0.0750 |
| No log | 50.96 | 306 | 0.7826 | 0.8 | 0.3222 | 1.0211 | 0.8000 | 0.7784 | 0.2596 | 0.0643 |
| No log | 51.96 | 312 | 0.7933 | 0.795 | 0.3273 | 1.0680 | 0.795 | 0.7712 | 0.2619 | 0.0764 |
| No log | 52.96 | 318 | 0.7883 | 0.805 | 0.3247 | 1.0730 | 0.805 | 0.7834 | 0.2426 | 0.0712 |
| No log | 53.96 | 324 | 0.7811 | 0.815 | 0.3219 | 1.0623 | 0.815 | 0.7913 | 0.2259 | 0.0716 |
| No log | 54.96 | 330 | 0.7784 | 0.815 | 0.3203 | 1.0657 | 0.815 | 0.7917 | 0.2797 | 0.0690 |
| No log | 55.96 | 336 | 0.7827 | 0.81 | 0.3219 | 1.0770 | 0.81 | 0.7885 | 0.2491 | 0.0752 |
| No log | 56.96 | 342 | 0.7701 | 0.815 | 0.3166 | 1.0614 | 0.815 | 0.7913 | 0.2664 | 0.0689 |
| No log | 57.96 | 348 | 0.7748 | 0.815 | 0.3187 | 1.0699 | 0.815 | 0.7913 | 0.2487 | 0.0722 |
| No log | 58.96 | 354 | 0.7669 | 0.815 | 0.3155 | 1.0607 | 0.815 | 0.7919 | 0.2482 | 0.0685 |
| No log | 59.96 | 360 | 0.7721 | 0.81 | 0.3180 | 1.0746 | 0.81 | 0.7859 | 0.2385 | 0.0730 |
| No log | 60.96 | 366 | 0.7645 | 0.815 | 0.3145 | 1.0650 | 0.815 | 0.7913 | 0.2468 | 0.0688 |
| No log | 61.96 | 372 | 0.7672 | 0.815 | 0.3157 | 1.0782 | 0.815 | 0.7913 | 0.2228 | 0.0728 |
| No log | 62.96 | 378 | 0.7625 | 0.82 | 0.3139 | 1.0673 | 0.82 | 0.8025 | 0.2323 | 0.0688 |
| No log | 63.96 | 384 | 0.7627 | 0.81 | 0.3144 | 1.1893 | 0.81 | 0.7892 | 0.2236 | 0.0710 |
| No log | 64.96 | 390 | 0.7629 | 0.815 | 0.3141 | 1.1934 | 0.815 | 0.7972 | 0.2277 | 0.0707 |
| No log | 65.96 | 396 | 0.7569 | 0.81 | 0.3118 | 1.1003 | 0.81 | 0.7866 | 0.2577 | 0.0696 |
| No log | 66.96 | 402 | 0.7619 | 0.815 | 0.3136 | 1.1365 | 0.815 | 0.7919 | 0.2562 | 0.0732 |
| No log | 67.96 | 408 | 0.7565 | 0.815 | 0.3114 | 1.1325 | 0.815 | 0.7919 | 0.2467 | 0.0694 |
| No log | 68.96 | 414 | 0.7558 | 0.815 | 0.3117 | 1.1895 | 0.815 | 0.7972 | 0.2453 | 0.0705 |
| No log | 69.96 | 420 | 0.7550 | 0.815 | 0.3111 | 1.1924 | 0.815 | 0.7972 | 0.2107 | 0.0709 |
| No log | 70.96 | 426 | 0.7573 | 0.805 | 0.3123 | 1.1886 | 0.805 | 0.7795 | 0.2476 | 0.0737 |
| No log | 71.96 | 432 | 0.7521 | 0.81 | 0.3099 | 1.1911 | 0.81 | 0.7866 | 0.2117 | 0.0698 |
| No log | 72.96 | 438 | 0.7542 | 0.81 | 0.3112 | 1.1878 | 0.81 | 0.7827 | 0.2332 | 0.0726 |
| No log | 73.96 | 444 | 0.7509 | 0.815 | 0.3096 | 1.1880 | 0.815 | 0.7899 | 0.2364 | 0.0709 |
| No log | 74.96 | 450 | 0.7526 | 0.81 | 0.3105 | 1.1889 | 0.81 | 0.7827 | 0.2453 | 0.0724 |
| No log | 75.96 | 456 | 0.7488 | 0.81 | 0.3090 | 1.1869 | 0.81 | 0.7827 | 0.2285 | 0.0699 |
| No log | 76.96 | 462 | 0.7506 | 0.815 | 0.3097 | 1.1901 | 0.815 | 0.7934 | 0.2547 | 0.0721 |
| No log | 77.96 | 468 | 0.7505 | 0.81 | 0.3098 | 1.1876 | 0.81 | 0.7827 | 0.2110 | 0.0724 |
| No log | 78.96 | 474 | 0.7487 | 0.815 | 0.3089 | 1.1885 | 0.815 | 0.7934 | 0.2319 | 0.0715 |
| No log | 79.96 | 480 | 0.7472 | 0.81 | 0.3083 | 1.1877 | 0.81 | 0.7827 | 0.2310 | 0.0714 |
| No log | 80.96 | 486 | 0.7494 | 0.81 | 0.3094 | 1.1877 | 0.81 | 0.7827 | 0.2462 | 0.0738 |
| No log | 81.96 | 492 | 0.7466 | 0.815 | 0.3082 | 1.1888 | 0.815 | 0.7922 | 0.2181 | 0.0709 |
| No log | 82.96 | 498 | 0.7467 | 0.81 | 0.3083 | 1.1874 | 0.81 | 0.7827 | 0.2454 | 0.0714 |
| 0.7129 | 83.96 | 504 | 0.7479 | 0.815 | 0.3088 | 1.1888 | 0.815 | 0.7922 | 0.2272 | 0.0741 |
| 0.7129 | 84.96 | 510 | 0.7456 | 0.81 | 0.3080 | 1.1853 | 0.81 | 0.7847 | 0.2358 | 0.0719 |
| 0.7129 | 85.96 | 516 | 0.7465 | 0.815 | 0.3082 | 1.1908 | 0.815 | 0.7922 | 0.2322 | 0.0721 |
| 0.7129 | 86.96 | 522 | 0.7454 | 0.805 | 0.3081 | 1.1848 | 0.805 | 0.7819 | 0.2262 | 0.0719 |
| 0.7129 | 87.96 | 528 | 0.7471 | 0.815 | 0.3086 | 1.1894 | 0.815 | 0.7922 | 0.2351 | 0.0741 |
| 0.7129 | 88.96 | 534 | 0.7459 | 0.815 | 0.3082 | 1.1885 | 0.815 | 0.7922 | 0.2159 | 0.0726 |
| 0.7129 | 89.96 | 540 | 0.7435 | 0.815 | 0.3072 | 1.1861 | 0.815 | 0.7922 | 0.2291 | 0.0712 |
| 0.7129 | 90.96 | 546 | 0.7454 | 0.81 | 0.3080 | 1.1876 | 0.81 | 0.7847 | 0.2180 | 0.0733 |
| 0.7129 | 91.96 | 552 | 0.7461 | 0.815 | 0.3083 | 1.1883 | 0.815 | 0.7942 | 0.2308 | 0.0743 |
| 0.7129 | 92.96 | 558 | 0.7451 | 0.815 | 0.3079 | 1.1883 | 0.815 | 0.7922 | 0.2330 | 0.0734 |
| 0.7129 | 93.96 | 564 | 0.7434 | 0.815 | 0.3073 | 1.1863 | 0.815 | 0.7942 | 0.2217 | 0.0720 |
| 0.7129 | 94.96 | 570 | 0.7446 | 0.815 | 0.3077 | 1.1882 | 0.815 | 0.7942 | 0.2400 | 0.0731 |
| 0.7129 | 95.96 | 576 | 0.7450 | 0.815 | 0.3079 | 1.1882 | 0.815 | 0.7942 | 0.2144 | 0.0735 |
| 0.7129 | 96.96 | 582 | 0.7440 | 0.815 | 0.3075 | 1.1871 | 0.815 | 0.7942 | 0.2348 | 0.0731 |
| 0.7129 | 97.96 | 588 | 0.7441 | 0.815 | 0.3076 | 1.1876 | 0.815 | 0.7942 | 0.2225 | 0.0732 |
| 0.7129 | 98.96 | 594 | 0.7442 | 0.815 | 0.3076 | 1.1877 | 0.815 | 0.7942 | 0.2072 | 0.0734 |
| 0.7129 | 99.96 | 600 | 0.7442 | 0.815 | 0.3076 | 1.1877 | 0.815 | 0.7942 | 0.2072 | 0.0734 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
| 14,017 | [
[
-0.0303955078125,
-0.039581298828125,
0.017059326171875,
0.004985809326171875,
-0.00676727294921875,
0.00458526611328125,
0.00543212890625,
0.0017805099487304688,
0.06256103515625,
0.0275421142578125,
-0.0343017578125,
-0.05413818359375,
-0.05157470703125,
-... |