| id | author | last_modified | downloads | likes | paperswithcode_id | tags | lastModified | createdAt | key | created | card | embedding | library_name | pipeline_tag | mask_token | card_data | widget_data | model_index | config | transformers_info | spaces | safetensors | transformersInfo | modelId | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GraydientPlatformAPI/animerge24 | GraydientPlatformAPI | 2023-11-29T03:10:10Z | 6 | 0 | null | [
"diffusers",
"license:openrail",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-11-29T03:10:10Z | 2023-11-29T03:01:17.000Z | null | null | ---
license: openrail
---
| null | diffusers | null | null | null | null | null | null | null | null | null | null | GraydientPlatformAPI/animerge24 | [
-0.1285340040922165,
-0.1861676573753357,
0.6529127955436707,
0.49436259269714355,
-0.19319328665733337,
0.23607435822486877,
0.36072009801864624,
0.05056355893611908,
0.579365611076355,
0.7400140166282654,
-0.6508103609085083,
-0.23783960938453674,
-0.7102246284484863,
-0.0478256717324256... |
GraydientPlatformAPI/lullaby | GraydientPlatformAPI | 2023-11-29T03:41:31Z | 6 | 0 | null | [
"diffusers",
"license:openrail",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-11-29T03:41:31Z | 2023-11-29T03:03:32.000Z | null | null | ---
license: openrail
---
| null | diffusers | null | null | null | null | null | null | null | null | null | null | GraydientPlatformAPI/lullaby | [
-0.1285340040922165,
-0.1861676573753357,
0.6529127955436707,
0.49436259269714355,
-0.19319328665733337,
0.23607435822486877,
0.36072009801864624,
0.05056355893611908,
0.579365611076355,
0.7400140166282654,
-0.6508103609085083,
-0.23783960938453674,
-0.7102246284484863,
-0.0478256717324256... |
SUSTech/SUS-Chat-34B | SUSTech | 2023-11-29T15:08:41Z | 6 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T15:08:41Z | 2023-11-29T09:07:53.000Z | null | null | ---
license: other
license_name: yi-license
license_link: LICENSE
widget:
- example_title: Yi-34B-Chat
text: hi
output:
text: ' Hello! How can I assist you today?'
- example_title: Yi-34B
text: >-
There's a place where time stands still. A place of breath taking wonder,
but also
output:
text: >2-
an eerie sense that something is just not right…
Between the two worlds lies The Forgotten Kingdom - home to creatures
long since thought extinct and ancient magic so strong it defies belief!
Only here can you find what has been lost for centuries: An Elixir Of
Life which will restore youth and vitality if only those who seek its
power are brave enough to face up against all manner of dangers lurking
in this mysterious land! But beware; some say there may even exist
powerful entities beyond our comprehension whose intentions towards
humanity remain unclear at best ---- they might want nothing more than
destruction itself rather then anything else from their quest after
immortality (and maybe someone should tell them about modern medicine)?
In any event though – one thing remains true regardless : whether or not
success comes easy depends entirely upon how much effort we put into
conquering whatever challenges lie ahead along with having faith deep
down inside ourselves too ;) So let’s get started now shall We?
pipeline_tag: text-generation
---
## Introduction
**SUS-Chat** is developed by SUSTech x IDEA-CCNL and based on `01-ai/Yi-34B`.
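This card does not include a usage snippet; below is a minimal, hedged sketch of loading the model with `transformers` (the prompt format is an assumption; consult the repository for the official chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SUSTech/SUS-Chat-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Prompt format assumed for illustration; check the repo for the official template.
prompt = "### Human: What is the capital of France?\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```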
## News
<details open>
<summary>🎯 <b>2023/11/23</b>: The chat models are open to the public.</summary>
This release contains two chat models based on previously released base models, two 8-bit models quantized with GPTQ, and two 4-bit models quantized with AWQ.
- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`
You can try some of them interactively at:
- [HuggingFace](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>
<details open>
<summary>🔔 <b>2023/11/23</b>: The Yi Series Models Community License Agreement is updated to v2.1.</summary>
</details>
<details>
<summary>🔥 <b>2023/11/08</b>: Invited test of Yi-34B chat model.</summary>
Application form:
- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>
<details>
<summary>🎯 <b>2023/11/05</b>: The base models <code>Yi-6B-200K</code> and <code>Yi-34B-200K</code> are released.</summary>
This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023/11/02</b>: The base models <code>Yi-6B</code> and <code>Yi-34B</code> are released.</summary>
The first public release contains two bilingual (English/Chinese) base models
with parameter sizes of 6B and 34B. Both are trained with a 4K sequence
length, which can be extended to 32K at inference time.
</details>
## Model Performance
### Base Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
While benchmarking open-source models, we observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.,
OpenCompass). A closer investigation of this difference revealed that models
may employ different prompts, post-processing strategies, and sampling
techniques, potentially resulting in significant variations in outcomes. Our
prompt and post-processing strategy remains consistent with the original
benchmark, and greedy decoding is employed during evaluation, without any
post-processing of the generated content. For scores that were not reported by
the original authors (including scores reported with different settings), we
obtained results with our own pipeline.
To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | SUSTech/SUS-Chat-34B | [
-0.5408896803855896,
-0.6268903017044067,
0.21486978232860565,
0.17465712130069733,
-0.3167197108268738,
-0.05131114646792412,
-0.009261132217943668,
-0.5303070545196533,
0.2855195105075836,
0.45661553740501404,
-0.7608141303062439,
-0.58822101354599,
-0.6102845072746277,
0.023273594677448... |
WebraftAI/synapsellm-7b-mistral-v0.3-preview | WebraftAI | 2023-11-29T13:01:24Z | 6 | 0 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T13:01:24Z | 2023-11-29T10:32:26.000Z | null | null | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- code
---
# SynapseLLM
SynapseLLM, developed by WebraftAI, is a series of large language models designed to build robust, generalized, and decentralized information systems. This repository houses the SynapseLLM finetune of Mistral. Finetuning was conducted on a custom dataset, limited in scope, focused on code and general question-answering scenarios. This adaptation showcases the model's versatility within specific domains, contributing to the broader landscape of AI advancements.
## Model Details
**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: Qlora
- Precision: float16
- Batch size: 16
- Maximum gradient norm: 0.3
- Optimizer: paged_adamw_32bit
- Warmup Ratio: 0.03
- Step(s) (trained): 100
- Epoch(s) (trained): 1
### Model Description
This is a 7B-parameter, decoder-only transformer model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 409k rows, comprising 140k general code, 143k GPT-3.5 Q/A, 63k Python code, and 54k general Q/A (generated through GPT-4); each row contains one instruction and one response. The trained adapters have been merged into the full model, so it can be loaded directly through the transformers library.
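Since the adapters are already merged into the full model, it should load like any other causal LM in `transformers`; a minimal sketch (the prompt and generation settings are illustrative, not from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WebraftAI/synapsellm-7b-mistral-v0.3-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Prompt format is an assumption; the card does not document one.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```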
- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English Only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | WebraftAI/synapsellm-7b-mistral-v0.3-preview | [
-0.3598523437976837,
-0.5533050298690796,
-0.030305279418826103,
0.29409459233283997,
-0.17733590304851532,
-0.44143402576446533,
-0.24223479628562927,
-0.3933267891407013,
0.016169225797057152,
0.5009297728538513,
-0.5740160346031189,
-0.46931400895118713,
-0.538318395614624,
-0.036964651... |
Q-bert/xglm-try | Q-bert | 2023-11-29T12:52:29Z | 6 | 0 | null | [
"transformers",
"safetensors",
"xglm",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T12:52:29Z | 2023-11-29T12:08:06.000Z | null | null | ---
license: mit
---
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Q-bert/xglm-try | [
-0.12853401899337769,
-0.18616756796836853,
0.6529130935668945,
0.49436235427856445,
-0.1931932121515274,
0.23607449233531952,
0.3607199192047119,
0.05056357383728027,
0.5793656706809998,
0.7400139570236206,
-0.6508103609085083,
-0.23783999681472778,
-0.7102250456809998,
-0.047825817018747... |
gmmarcos/ppo-LunarLander-v2 | gmmarcos | 2023-11-29T16:06:36Z | 6 | 0 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T16:06:36Z | 2023-11-29T12:26:49.000Z | null | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.08 +/- 13.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
You can load the trained agent from the Hub. A minimal sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed; verify it against the files in the model repo.
checkpoint = load_from_hub("gmmarcos/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | gmmarcos/ppo-LunarLander-v2 | [
-0.0031747175380587578,
-0.39441168308258057,
0.24817690253257751,
0.3390539586544037,
-0.08787580579519272,
0.04007972404360771,
0.5000531077384949,
-0.17607855796813965,
0.2888222634792328,
0.9444828629493713,
-0.6269251108169556,
-0.512033998966217,
-0.4980958104133606,
-0.2793835103511... |
datadrill/dolphin-2.2.1-mistral-7B-GGUF-dd | datadrill | 2023-11-29T15:00:08Z | 6 | 0 | null | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T15:00:08Z | 2023-11-29T12:33:22.000Z | null | null | Entry not found | null | transformers | null | null | null | null | null | null | null | null | null | null | datadrill/dolphin-2.2.1-mistral-7B-GGUF-dd | [
-0.3227648138999939,
-0.22568483650684357,
0.8622256517410278,
0.43461519479751587,
-0.5282990336418152,
0.7012965679168701,
0.7915716767311096,
0.07618631422519684,
0.7746025323867798,
0.25632259249687195,
-0.7852814793586731,
-0.22573857009410858,
-0.910447895526886,
0.5715669393539429,
... |
nimrita/ppo-Pyramids-Training | nimrita | 2023-11-29T12:37:39Z | 6 | 0 | null | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | 2023-11-29T12:37:39Z | 2023-11-29T12:37:33.000Z | null | null | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nimrita/ppo-Pyramids-Training
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| null | ml-agents | reinforcement-learning | null | null | null | null | null | null | null | null | null | nimrita/ppo-Pyramids-Training | [
-0.5714355111122131,
-0.4731568694114685,
0.00013145497359801084,
0.21427808701992035,
-0.15764014422893524,
0.18235310912132263,
0.23740041255950928,
-0.21423548460006714,
0.4678598940372467,
0.39688825607299805,
-0.5679342746734619,
-0.6904628276824951,
-0.4109533727169037,
-0.2064875066... |
Ainura/w | Ainura | 2023-11-29T13:16:56Z | 6 | 0 | null | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | 2023-11-29T13:16:56Z | 2023-11-29T13:11:05.000Z | null | null | Entry not found | null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | Ainura/w | [
-0.32276463508605957,
-0.2256849706172943,
0.8622266054153442,
0.4346153736114502,
-0.5282987952232361,
0.7012974619865417,
0.7915722131729126,
0.07618652284145355,
0.7746030688285828,
0.2563217282295227,
-0.7852814793586731,
-0.22573867440223694,
-0.9104479551315308,
0.571567177772522,
... |
Agreus/KOlivia-distilbert | Agreus | 2023-11-29T14:47:41Z | 6 | 0 | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T14:47:41Z | 2023-11-29T14:47:00.000Z | null | null | ---
license: apache-2.0
---
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | Agreus/KOlivia-distilbert | [
-0.12853312492370605,
-0.18616832792758942,
0.6529129147529602,
0.494362473487854,
-0.19319364428520203,
0.23607414960861206,
0.36071962118148804,
0.05056367814540863,
0.5793655514717102,
0.7400145530700684,
-0.6508100032806396,
-0.237839937210083,
-0.7102250456809998,
-0.0478254035115242,... |
rodrigomoreirasilva/bertopic_lai_recursos_CGU | rodrigomoreirasilva | 2023-11-29T19:53:21Z | 6 | 0 | null | [
"bertopic",
"text-classification",
"pt",
"region:us"
] | 2023-11-29T19:53:21Z | 2023-11-29T15:40:55.000Z | null | null | ---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
language:
- pt
---
# bertopic_lai_recursos_CGU
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
import re

from bertopic import BERTopic

topic_model = BERTopic.load("rodrigomoreirasilva/bertopic_lai_recursos_CGU")
topic_model.get_topic_info()

# Helper to clean the text before computing embeddings.
def pre_processamento(texto):
    if texto is None:  # handle missing text
        return ' '
    texto = texto.replace('\n', ' ').lower()  # remove newlines / lowercase
    texto = re.sub(r'\s+', ' ', texto)        # collapse excessive whitespace
    return texto.strip()

exemplo = "Este é um teste do modelo"
exemplo_tratado = pre_processamento(exemplo)
topic, prob = topic_model.transform(exemplo_tratado)
```
## Topic overview
* Number of topics: 309
* Number of training documents: 17768
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | informação - processo - federal - edital - 2018 | 10 | -1_informação_processo_federal_edital |
| 0 | janeiro 2016 - prefixo - energia elétrica - 60 banco - conta | 7688 | 0_janeiro 2016_prefixo_energia elétrica_60 banco |
| 1 | dias úteis - destinatário - ect - ouvidoria - nome matrícula | 514 | 1_dias úteis_destinatário_ect_ouvidoria |
| 2 | passageiros - aeronaves - viagens - jair bolsonaro - custo | 148 | 2_passageiros_aeronaves_viagens_jair bolsonaro |
| 3 | 11 parágrafo - 527 2011 - fornecidas requeiro - 5º 12 - atenção peço | 148 | 3_11 parágrafo_527 2011_fornecidas requeiro_5º 12 |
| 4 | íntegra processo - sei - 25000 - informação pública - referente processo | 143 | 4_íntegra processo_sei_25000_informação pública |
| 5 | bolsonaro - presidência república - roberto nascimento - nascimento silva - 104 | 142 | 5_bolsonaro_presidência república_roberto nascimento_nascimento silva |
| 6 | sanção - proprietário - 000 000 - contendo seguintes - número refere | 122 | 6_sanção_proprietário_000 000_contendo seguintes |
| 7 | assembleia - deu origem - endereço rua - informação considerando - forneça cópia | 114 | 7_assembleia_deu origem_endereço rua_informação considerando |
| 8 | reg - disponibilizar - mapa - cópia relatório - avaliação | 111 | 8_reg_disponibilizar_mapa_cópia relatório |
| 9 | integral processo - cópia integral - cópia processo - 2013 - vistas | 106 | 9_integral processo_cópia integral_cópia processo_2013 |
| 10 | auxílio - dataprev - negado - contestação - 600 | 103 | 10_auxílio_dataprev_negado_contestação |
| 11 | cnpj - sc - 2008 - receita federal - fazenda | 102 | 11_cnpj_sc_2008_receita federal |
| 12 | denúncia - anexa - 11 2012 - providências tomadas - cópia íntegra | 98 | 12_denúncia_anexa_11 2012_providências tomadas |
| 13 | entrada saída - planalto - registros - janeiro 2019 - bolsonaro | 94 | 13_entrada saída_planalto_registros_janeiro 2019 |
| 14 | serpro - afastamento - cópia processo - digital - 2015 | 91 | 14_serpro_afastamento_cópia processo_digital |
| 15 | eduardo - ministério saúde - 2011 informação - propostas - farmanguinhos | 88 | 15_eduardo_ministério saúde_2011 informação_propostas |
| 16 | aquisição - relação contratos - insumos - razão social - cópia digitalizada | 85 | 16_aquisição_relação contratos_insumos_razão social |
| 17 | aposentadoria - siape - nominal - vinculados - deverá conter | 85 | 17_aposentadoria_siape_nominal_vinculados |
| 18 | diploma - informática - rj - expedido - carga horária | 85 | 18_diploma_informática_rj_expedido |
| 19 | correio eletrônico - 818 60 - cpf 104 - seguinte - silva cpf | 79 | 19_correio eletrônico_818 60_cpf 104_seguinte |
| 20 | hotmail - agradeço atenção - aérea - formato pdf - marinha | 79 | 20_hotmail_agradeço atenção_aérea_formato pdf |
| 21 | recursos - diretoria - 2019 - rdc - incluindo | 79 | 21_recursos_diretoria_2019_rdc |
| 22 | redação - inep - ensino médio - microdados - exame | 77 | 22_redação_inep_ensino médio_microdados |
| 23 | receber cópia - base - informação - 040 - bombeiros | 73 | 23_receber cópia_base_informação_040 |
| 24 | conservação - concluído - medidas tomadas - rio - justiça federal | 71 | 24_conservação_concluído_medidas tomadas_rio |
| 25 | raimundo - fortaleza - barreto - oab - atendimento | 69 | 25_raimundo_fortaleza_barreto_oab |
| 26 | ect - avaliado - matricula - férias - cópias | 68 | 26_ect_avaliado_matricula_férias |
| 27 | inclua - tratar informação - 2019 ministério - ligados - deputados | 67 | 27_inclua_tratar informação_2019 ministério_ligados |
| 28 | inss - benefício - perícia - previdência social - laudo | 64 | 28_inss_benefício_perícia_previdência social |
| 29 | incorporação - consumidor - energia elétrica - redes - distribuidora | 63 | 29_incorporação_consumidor_energia elétrica_redes |
| 30 | lotação - empregados - abril 2020 - nome cargo - assessor | 63 | 30_lotação_empregados_abril 2020_nome cargo |
| 31 | exército - fundamento legal - 00 - número contrato - engenharia | 61 | 31_exército_fundamento legal_00_número contrato |
| 32 | ingresso - semestre - conceição - doutorado - tese | 61 | 32_ingresso_semestre_conceição_doutorado |
| 33 | infração - autuação - dnit - auto - notificações | 61 | 33_infração_autuação_dnit_auto |
| 34 | manifestações - pareceres - abrange - notas técnicas - íntegra processo | 61 | 34_manifestações_pareceres_abrange_notas técnicas |
| 35 | uf - série - maior - possivel - polícia federal | 59 | 35_uf_série_maior_possivel |
| 36 | informe - auditoria interna - funcionários - dependências - padrão | 57 | 36_informe_auditoria interna_funcionários_dependências |
| 37 | polo - bancário - 06 2014 - cargo técnico - campinas | 57 | 37_polo_bancário_06 2014_cargo técnico |
| 38 | imóvel - financiamento - cef - proprietários - anuência | 56 | 38_imóvel_financiamento_cef_proprietários |
| 39 | presentes - 2021 2022 - enem - área - santo | 55 | 39_presentes_2021 2022_enem_área |
| 40 | reserva - considerando - df - janeiro 2021 - tange | 54 | 40_reserva_considerando_df_janeiro 2021 |
| 41 | telecomunicações - usuário - carmo - fonseca - agência nacional | 51 | 41_telecomunicações_usuário_carmo_fonseca |
| 42 | cumprimentando - cidadã - artigo 32 - reitora - completo | 51 | 42_cumprimentando_cidadã_artigo 32_reitora |
| 43 | 1º 12 - parágrafo 1º - informações solicitadas - termos retardar - receber órgãos | 51 | 43_1º 12_parágrafo 1º_informações solicitadas_termos retardar |
| 44 | aguardo - requeiro - vistas - 010 - informações detalhadas | 50 | 44_aguardo_requeiro_vistas_010 |
| 45 | reitor - gentilmente - aguardo - 10 dez - extraordinárias | 49 | 45_reitor_gentilmente_aguardo_10 dez |
| 46 | município - auxílio - cnpj - variáveis - ibge | 49 | 46_município_auxílio_cnpj_variáveis |
| 47 | doenças - empregados - últimos 10 - demissão - cid | 49 | 47_doenças_empregados_últimos 10_demissão |
| 48 | laboratório - referente - farmanguinhos - fiocruz - anexo informação | 48 | 48_laboratório_referente_farmanguinhos_fiocruz |
| 49 | mec - educação superior - curso - 23546 - ministério educação | 48 | 49_mec_educação superior_curso_23546 |
| 50 | gru - encaminhar cópia - relatório - passagem - possíveis irregularidades | 48 | 50_gru_encaminhar cópia_relatório_passagem |
| 51 | anvisa - barbosa - encaminhada - nacional vigilância - quinta | 46 | 51_anvisa_barbosa_encaminhada_nacional vigilância |
| 52 | banco - roberto nascimento - 818 60 - 104 835 - silva cpf | 46 | 52_banco_roberto nascimento_818 60_104 835 |
| 53 | uf - lote - polícia federal - registradas - certificado | 45 | 53_uf_lote_polícia federal_registradas |
| 54 | amazônia - indígenas - ambientais - estabelecido - aumento | 45 | 54_amazônia_indígenas_ambientais_estabelecido |
| 55 | bilhões - relativamente - previdenciária - orçamentárias - arrecadação | 44 | 55_bilhões_relativamente_previdenciária_orçamentárias |
| 56 | questionário - considerações - ministério fazenda - arquivo - mre | 44 | 56_questionário_considerações_ministério fazenda_arquivo |
| 57 | anexo solicitação - regularidade - aguardo - detalhado - segue | 44 | 57_anexo solicitação_regularidade_aguardo_detalhado |
| 58 | prf - multas - 1997 - indústria - 2013 2014 | 43 | 58_prf_multas_1997_indústria |
| 59 | crime - 818 60 - 104 835 - silva cpf - roberto | 43 | 59_crime_818 60_104 835_silva cpf |
| 60 | exército - minas - importação - concorrentes - 2020 | 42 | 60_exército_minas_importação_concorrentes |
| 61 | estoque - ministério saúde - peço - vacinas - validade | 42 | 61_estoque_ministério saúde_peço_vacinas |
| 62 | petrobras - denúncias - ouvidoria geral - direitos humanos - moral | 42 | 62_petrobras_denúncias_ouvidoria geral_direitos humanos |
| 63 | gerente - bb - roberto silva - denúncia - morais | 42 | 63_gerente_bb_roberto silva_denúncia |
| 64 | representação - mre - supracitadas - embaixada - faz necessário | 41 | 64_representação_mre_supracitadas_embaixada |
| 65 | comandante - gabinete - 12 2014 - 1997 - 31 | 41 | 65_comandante_gabinete_12 2014_1997 |
| 66 | inserção - secom - campanhas - peço - compatível | 41 | 66_inserção_secom_campanhas_peço |
| 67 | totalidade - intuito - grato atenção - formato pdf - http www | 40 | 67_totalidade_intuito_grato atenção_formato pdf |
| 68 | 25351 - sigilo exceção - preceito geral - 7º 2º - direito fundamental | 40 | 68_25351_sigilo exceção_preceito geral_7º 2º |
| 69 | editais - preços - retirada - qualquer pessoa - princípio constitucional | 40 | 69_editais_preços_retirada_qualquer pessoa |
| 70 | bastos - anvisa - 25351 - farmacêutica - ativo | 40 | 70_bastos_anvisa_25351_farmacêutica |
| 71 | imóvel - cef - financiamento - caixa econômica - campinas | 39 | 71_imóvel_cef_financiamento_caixa econômica |
| 72 | assinados - casa civil - econômico - positiva - marcelo | 38 | 72_assinados_casa civil_econômico_positiva |
| 73 | ministro justiça - exceções - documento enviado - tarjas - agradeço antecipadamente | 38 | 73_ministro justiça_exceções_documento enviado_tarjas |
| 74 | gentileza disponibilizar - reg - risco - industrial - cópia | 38 | 74_gentileza disponibilizar_reg_risco_industrial |
| 75 | posse - nascimento silva - roberto - 818 60 - cpf 104 | 38 | 75_posse_nascimento silva_roberto_818 60 |
| 76 | embaixada - itamaraty - ministério relações - chave - recebidos | 38 | 76_embaixada_itamaraty_ministério relações_chave |
| 77 | linguagens - inclui - notas - matemática - 2021 2022 | 37 | 77_linguagens_inclui_notas_matemática |
| 78 | lai - contratos administrativos - ministério saúde - órgãos entidades - patrimônio público | 37 | 78_lai_contratos administrativos_ministério saúde_órgãos entidades |
| 79 | anexo solicitação - documento - aguardo respostas - 1978 - ampliação | 37 | 79_anexo solicitação_documento_aguardo respostas_1978 |
| 80 | roberto nascimento - 818 60 - 104 835 - silva cpf - funcionário | 37 | 80_roberto nascimento_818 60_104 835_silva cpf |
| 81 | lopes - luis - funcionário - matrícula - cópia ato | 37 | 81_lopes_luis_funcionário_matrícula |
| 82 | 0001 - fundo - ribeiro - direto - susep | 36 | 82_0001_fundo_ribeiro_direto |
| 83 | aquisição - insumos - razão social - contratada - cópia digitalizada | 36 | 83_aquisição_insumos_razão social_contratada |
| 84 | mestrado - ufmg - fluminense - provas - universidade federal | 36 | 84_mestrado_ufmg_fluminense_provas |
| 85 | segurança trabalho - jornada - base informação - ministério trabalho - seguintes esclarecimentos | 36 | 85_segurança trabalho_jornada_base informação_ministério trabalho |
| 86 | cópia íntegra - instrução normativa - nascimento silva - 818 60 - cpf 104 | 36 | 86_cópia íntegra_instrução normativa_nascimento silva_818 60 |
| 87 | termos peço - nesses termos - documento anexo - ministério educação - deferimento | 35 | 87_termos peço_nesses termos_documento anexo_ministério educação |
| 88 | anac - via mail - gru - encaminhar cópia - aprovada | 35 | 88_anac_via mail_gru_encaminhar cópia |
| 89 | anvisa - resolução nº - 25351 - cópias - preços | 35 | 89_anvisa_resolução nº_25351_cópias |
| 90 | gentileza disponibilizar - aprovada - cópia digital - produto - reg | 34 | 90_gentileza disponibilizar_aprovada_cópia digital_produto |
| 91 | ementa - imposto renda - vigentes - receitas - órgão entidade | 34 | 91_ementa_imposto renda_vigentes_receitas |
| 92 | almeida - digital - solicitamos cópia - moraes - processo número | 34 | 92_almeida_digital_solicitamos cópia_moraes |
| 93 | 01 2021 - objetiva - ufpe - bruto - disciplinar | 34 | 93_01 2021_objetiva_ufpe_bruto |
| 94 | códigos - vagas - instituto federal - professor - cargo técnico | 34 | 94_códigos_vagas_instituto federal_professor |
| 95 | grato atenção - intuito - concurso público - formato pdf - peço gentileza | 34 | 95_grato atenção_intuito_concurso público_formato pdf |
| 96 | anexado - recentes - anexar - médicos - laudos | 33 | 96_anexado_recentes_anexar_médicos |
| 97 | roraima - superintendência - notas técnicas - outros documentos - polícia federal | 33 | 97_roraima_superintendência_notas técnicas_outros documentos |
| 98 | 19 - pesquisa - fiocruz - controle social - cep | 33 | 98_19_pesquisa_fiocruz_controle social |
| 99 | plano cargos - salários - drogas - sul - eficácia | 32 | 99_plano cargos_salários_drogas_sul |
| 100 | designação - exterior - portaria - complementar nº - janeiro 2023 | 32 | 100_designação_exterior_portaria_complementar nº |
| 101 | anderson - natal - pereira - julho 2012 - telégrafos | 31 | 101_anderson_natal_pereira_julho 2012 |
| 102 | cumprimentando - lai - inciso - requeiro - contratada | 31 | 102_cumprimentando_lai_inciso_requeiro |
| 103 | 0001 - estabelecimento - cnpj nº - retorno - vem respeitosamente | 31 | 103_0001_estabelecimento_cnpj nº_retorno |
| 104 | atas reuniões - conselho administração - 2013 2014 - aprovadas - mérito | 30 | 104_atas reuniões_conselho administração_2013 2014_aprovadas |
| 105 | abin - inteligência - gsi - justiça federal - amparo | 30 | 105_abin_inteligência_gsi_justiça federal |
| 106 | documento anexo - documento - anexo - texto - | 30 | 106_documento anexo_documento_anexo_texto |
| 107 | madeira - gentileza - disponibilizar cópia - certificado - anvisa | 30 | 107_madeira_gentileza_disponibilizar cópia_certificado |
| 108 | dnit - tramitação - referido processo - processo administrativo - requer informações | 30 | 108_dnit_tramitação_referido processo_processo administrativo |
| 109 | cef - terceirizados - quantos - cargo técnico - caixa econômica | 30 | 109_cef_terceirizados_quantos_cargo técnico |
| 110 | infraero - conselho administração - aeroportos - pacheco - 06 2015 | 30 | 110_infraero_conselho administração_aeroportos_pacheco |
| 111 | científica - globo - roberto nascimento - nascimento silva - pessoas | 29 | 111_científica_globo_roberto nascimento_nascimento silva |
| 112 | residência - raça - campus - curso graduação - ingresso | 29 | 112_residência_raça_campus_curso graduação |
| 113 | planilha - alagoas - cadastral - descrição - município uf | 29 | 113_planilha_alagoas_cadastral_descrição |
| 114 | ofício - sendo tomadas - solicitamos informações - xlsx - tipos | 28 | 114_ofício_sendo tomadas_solicitamos informações_xlsx |
| 115 | jair bolsonaro - ata - reuniões - peço - ministério defesa | 28 | 115_jair bolsonaro_ata_reuniões_peço |
| 116 | terceirizados - quantos funcionários - mão obra - hoje - petrobras | 28 | 116_terceirizados_quantos funcionários_mão obra_hoje |
| 117 | 2013 - discentes - inciso - 8112 - descumprimento | 28 | 117_2013_discentes_inciso_8112 |
| 118 | inss - certidão - mês - deferido - inscrição | 28 | 118_inss_certidão_mês_deferido |
| 119 | salas - ciências - regimento interno - comissões - universitário | 28 | 119_salas_ciências_regimento interno_comissões |
| 120 | aeronáutica - marques - constituição república - nº 784 - xxxiv | 27 | 120_aeronáutica_marques_constituição república_nº 784 |
| 121 | requisitamos informações - digitalizado - respondido separadamente - facilitar compreensão - abaixo referentes | 27 | 121_requisitamos informações_digitalizado_respondido separadamente_facilitar compreensão |
| 122 | laudos - ibama - ambiental - 2011 seguintes - 900 | 27 | 122_laudos_ibama_ambiental_2011 seguintes |
| 123 | nome empresa - reg - gentileza informar - longo prazo - embasaram | 27 | 123_nome empresa_reg_gentileza informar_longo prazo |
| 124 | gab - situação atual - cópia anexa - requerimentos - ofício nº | 27 | 124_gab_situação atual_cópia anexa_requerimentos |
| 125 | atividades desenvolvidas - carga - atribuições - ministério saúde - seguintes informações | 27 | 125_atividades desenvolvidas_carga_atribuições_ministério saúde |
| 126 | exmo - embaixada - solicitações - atendido - brasileiro | 27 | 126_exmo_embaixada_solicitações_atendido |
| 127 | pós graduação - reitor - discriminada - quantos - rubrica | 26 | 127_pós graduação_reitor_discriminada_quantos |
| 128 | órgão entidade - compromissos - formato aberto - faltam - decreto federal | 26 | 128_órgão entidade_compromissos_formato aberto_faltam |
| 129 | parâmetros - questões - extras - referencia - discriminação | 26 | 129_parâmetros_questões_extras_referencia |
| 130 | sergio - torres - instituto - referente processo - 2012 | 26 | 130_sergio_torres_instituto_referente processo |
| 131 | reg - estudos - resíduos - resultados - curto | 26 | 131_reg_estudos_resíduos_resultados |
| 132 | projeto - software - convite - nome cargo - participação | 26 | 132_projeto_software_convite_nome cargo |
| 133 | mar - unifesp - câmara - ciências - gustavo | 26 | 133_mar_unifesp_câmara_ciências |
| 134 | inep - protegidos - 2021 2022 - mateus - microdados | 26 | 134_inep_protegidos_2021 2022_mateus |
| 135 | governança - tcu - tribunal contas - questionários - arquivos | 26 | 135_governança_tcu_tribunal contas_questionários |
| 136 | relatórios - íntegra - redes sociais - assessoria - janeiro fevereiro | 26 | 136_relatórios_íntegra_redes sociais_assessoria |
| 137 | aquisição - relação contratos - insumos - razão social - cópia digitalizada | 25 | 137_aquisição_relação contratos_insumos_razão social |
| 138 | autos processo - abin - deputado federal - martins - acusação | 25 | 138_autos processo_abin_deputado federal_martins |
| 139 | nome matrícula - roberto silva - ro - correio - 01 01 | 25 | 139_nome matrícula_roberto silva_ro_correio |
| 140 | dnit - transportes - frança - ouvidor - federal nº | 25 | 140_dnit_transportes_frança_ouvidor |
| 141 | administrativo disciplinar - tomada decisão - nº 14 - sequência - publicado dou | 25 | 141_administrativo disciplinar_tomada decisão_nº 14_sequência |
| 142 | assistente - cargos vagos - vagas - existem - quadro | 25 | 142_assistente_cargos vagos_vagas_existem |
| 143 | ibama - lgpd - 2022 - informações pessoais - permite | 24 | 143_ibama_lgpd_2022_informações pessoais |
| 144 | especificar - unidade - existentes - gov - auditoria interna | 24 | 144_especificar_unidade_existentes_gov |
| 145 | enquadrado - requerimento anexo - guia - prescrição - atendido | 24 | 145_enquadrado_requerimento anexo_guia_prescrição |
| 146 | lotados - cadastrados - ministério economia - financeiros - siape | 24 | 146_lotados_cadastrados_ministério economia_financeiros |
| 147 | anexo solicitação - documento - presente informação - exposição - avaliação desempenho | 24 | 147_anexo solicitação_documento_presente informação_exposição |
| 148 | governança - entidade - práticas - comitê - adotado | 23 | 148_governança_entidade_práticas_comitê |
| 149 | município - belo - inquérito - fraudes - memória | 23 | 149_município_belo_inquérito_fraudes |
| 150 | ocorrências - livro - ficam - agradeço atenção - infraero | 23 | 150_ocorrências_livro_ficam_agradeço atenção |
| 151 | arquivo anexo - 2000 - pdf - limite - detalhamento | 23 | 151_arquivo anexo_2000_pdf_limite |
| 152 | faixa - planta - entregues - dúvidas - município | 23 | 152_faixa_planta_entregues_dúvidas |
| 153 | construção - ata - estudantil - atender - restrita | 23 | 153_construção_ata_estudantil_atender |
| 154 | setor - esic - apoio - edu - atribuições | 23 | 154_setor_esic_apoio_edu |
| 155 | militares - 2019 2020 - marinha - obrigado - devidos | 23 | 155_militares_2019 2020_marinha_obrigado |
| 156 | general - eduardo - maio 2021 - processo disciplinar - comando exército | 22 | 156_general_eduardo_maio 2021_processo disciplinar |
| 157 | redes sociais - institucionais - palavras - quantas vezes - representam | 22 | 157_redes sociais_institucionais_palavras_quantas vezes |
| 158 | disponibilização - grupo trabalho - 2022 - diante exposto - técnica nº | 22 | 158_disponibilização_grupo trabalho_2022_diante exposto |
| 159 | cgu - auditoria interna - passíveis - definitivo - envio | 22 | 159_cgu_auditoria interna_passíveis_definitivo |
| 160 | requerimento - documento - anexo - direcionado - denatran | 22 | 160_requerimento_documento_anexo_direcionado |
| 161 | ii iii - salários - coletivos - assistente - plano cargos | 22 | 161_ii iii_salários_coletivos_assistente |
| 162 | abin - inteligência - 2014 2015 - produzidos - sigilo | 22 | 162_abin_inteligência_2014 2015_produzidos |
| 163 | informações solicitadas - 1º 12 - 2011 2013 - termos retardar - deliberadamente fornecimento | 21 | 163_informações solicitadas_1º 12_2011 2013_termos retardar |
| 164 | tributária - 86 - planilha formato - xls - informação deve | 21 | 164_tributária_86_planilha formato_xls |
| 165 | novembro 2011 - 527 18 - fundamenta - 216 constituição - amazônia | 21 | 165_novembro 2011_527 18_fundamenta_216 constituição |
| 166 | anexo - ver anexo - ver - - | 20 | 166_anexo_ver anexo_ver_ |
| 167 | cuiabá - barra - compra - reitoria - prazo resposta | 20 | 167_cuiabá_barra_compra_reitoria |
| 168 | desmatamento - implementação - amazônia - controle - quantos | 20 | 168_desmatamento_implementação_amazônia_controle |
| 169 | naval - rio janeiro - gasto - base - quantas vezes | 20 | 169_naval_rio janeiro_gasto_base |
| 170 | patrocínio - caixa econômica - vigência - assinado - federal | 20 | 170_patrocínio_caixa econômica_vigência_assinado |
| 171 | enquadrados - federativa - gerência - indígenas - ainda | 20 | 171_enquadrados_federativa_gerência_indígenas |
| 172 | informar quanto - pagou - energia elétrica - nascimento silva - 835 818 | 20 | 172_informar quanto_pagou_energia elétrica_nascimento silva |
| 173 | 25000 - documentos processo - informação 12 - integral autos - fulcro | 20 | 173_25000_documentos processo_informação 12_integral autos |
| 174 | parâmetros - lista - inep - acima - questões | 20 | 174_parâmetros_lista_inep_acima |
| 175 | aposentados - nestes termos - peço deferimento - catarina - nº 02 | 19 | 175_aposentados_nestes termos_peço deferimento_catarina |
| 176 | genéricos - fila - informação documento - 25351 - petição | 19 | 176_genéricos_fila_informação documento_25351 |
| 177 | vagas - deficiência - edital - candidato - aprovados | 19 | 177_vagas_deficiência_edital_candidato |
| 178 | suspensão - 12527 - falta - direito informações - alvo | 19 | 178_suspensão_12527_falta_direito informações |
| 179 | detalhamento - arquivo pdf - fls - segue - solicitação informações | 19 | 179_detalhamento_arquivo pdf_fls_segue |
| 180 | laboratórios - isonomia - direito defesa - ministério agricultura - abastecimento | 19 | 180_laboratórios_isonomia_direito defesa_ministério agricultura |
| 181 | sensu - curso pós - ambiental - 05 2017 - goiás | 19 | 181_sensu_curso pós_ambiental_05 2017 |
| 182 | aguardo respostas - arquivo digital - equivalentes - deferidos - pós graduação | 19 | 182_aguardo respostas_arquivo digital_equivalentes_deferidos |
| 183 | inss - pensão - prova - margem - instituições financeiras | 19 | 183_inss_pensão_prova_margem |
| 184 | cnpq - produtividade - bolsa - diretoria executiva - planilhas | 19 | 184_cnpq_produtividade_bolsa_diretoria executiva |
| 185 | obrigada - doação - serviços prestados - adriana - reuniões | 19 | 185_obrigada_doação_serviços prestados_adriana |
| 186 | itália - alemanha - salvador - tabela anexo - frança | 19 | 186_itália_alemanha_salvador_tabela anexo |
| 187 | reunião ordinária - comitê - ata - diretoria executiva - 11 2019 | 19 | 187_reunião ordinária_comitê_ata_diretoria executiva |
| 188 | rm - comando - fornecer - coordenadas - informação quanto | 19 | 188_rm_comando_fornecer_coordenadas |
| 189 | empregados - enquadrados - federativa - gerência - indígenas | 19 | 189_empregados_enquadrados_federativa_gerência |
| 190 | brito - participou - computação - peço gentileza - grato atenção | 19 | 190_brito_participou_computação_peço gentileza |
| 191 | cedidos - mapa - outros órgãos - ministério agricultura - movimentação | 18 | 191_cedidos_mapa_outros órgãos_ministério agricultura |
| 192 | produzida - avaliação desempenho - cessão - atitude - cmri | 18 | 192_produzida_avaliação desempenho_cessão_atitude |
| 193 | alagoas - farias - inss - prevenção - endereço | 18 | 193_alagoas_farias_inss_prevenção |
| 194 | ufrj - ação judicial - contestação - 57 - agu | 18 | 194_ufrj_ação judicial_contestação_57 |
| 195 | econômico - requerer seguintes - eletrobras - 06 2019 - jornal | 18 | 195_econômico_requerer seguintes_eletrobras_06 2019 |
| 196 | relatórios - auditoria interna - controle - finalização - órgãos | 18 | 196_relatórios_auditoria interna_controle_finalização |
| 197 | requerimento anexo - enquadrado - guia - sic - atenção | 18 | 197_requerimento anexo_enquadrado_guia_sic |
| 198 | autos infração - notificações - dnit - indeferimento - placas | 18 | 198_autos infração_notificações_dnit_indeferimento |
| 199 | covid - pacientes - exército brasileiro - rede - oficiais | 18 | 199_covid_pacientes_exército brasileiro_rede |
| 200 | cópias - sanitária - parágrafo único - maio 2012 - 724 16 | 18 | 200_cópias_sanitária_parágrafo único_maio 2012 |
| 201 | federais - responderam - linguagens - matemática - verdade | 18 | 201_federais_responderam_linguagens_matemática |
| 202 | marca - afins - coordenação geral - fiscalização - secretaria | 18 | 202_marca_afins_coordenação geral_fiscalização |
| 203 | manifestação - consigo - canal - recebi resposta - 23546 | 18 | 203_manifestação_consigo_canal_recebi resposta |
| 204 | matrículas - concorrência - recorte - indígenas - afirmativa | 17 | 204_matrículas_concorrência_recorte_indígenas |
| 205 | parecer técnico - gentileza disponibilizar - anvisa - reavaliação - embrapa | 17 | 205_parecer técnico_gentileza disponibilizar_anvisa_reavaliação |
| 206 | janeiro 2016 - prefixo - energia elétrica - 60 banco - conta | 17 | 206_janeiro 2016_prefixo_energia elétrica_60 banco |
| 207 | informar quanto - ms - nascimento silva - 835 818 - cpf 104 | 17 | 207_informar quanto_ms_nascimento silva_835 818 |
| 208 | inep - microdados - lgpd - ufmg - 2021 2022 | 17 | 208_inep_microdados_lgpd_ufmg |
| 209 | desclassificados - gsi - classificou - nup - anexos | 17 | 209_desclassificados_gsi_classificou_nup |
| 210 | nota técnica - rol - desclassificados - 2014 - documento | 17 | 210_nota técnica_rol_desclassificados_2014 |
| 211 | marcas - nome empresa - ativos - excel - artigo 10 | 17 | 211_marcas_nome empresa_ativos_excel |
| 212 | susep - aposentadoria - fundamento legal - 90 - vista | 17 | 212_susep_aposentadoria_fundamento legal_90 |
| 213 | admissão - aposentadoria - 12 06 - siape - tcu | 17 | 213_admissão_aposentadoria_12 06_siape |
| 214 | gov busca - cgu - deixo - http - lista contendo | 17 | 214_gov busca_cgu_deixo_http |
| 215 | novembro 2011 - 527 18 - informação nº - aeronáutica - nome cargo | 17 | 215_novembro 2011_527 18_informação nº_aeronáutica |
| 216 | 93 - notas fiscais - obrigatória - 01 01 - fazenda gov | 17 | 216_93_notas fiscais_obrigatória_01 01 |
| 217 | veículos - peço gentileza - 2007 - bndes - presente | 17 | 217_veículos_peço gentileza_2007_bndes |
| 218 | 25351 - ativo - medicamento - seguinte informação - registro | 17 | 218_25351_ativo_medicamento_seguinte informação |
| 219 | vigência - importação - fim - ministério economia - imposto | 17 | 219_vigência_importação_fim_ministério economia |
| 220 | referida - 2002 - esclarecemos - união estados - item | 17 | 220_referida_2002_esclarecemos_união estados |
| 221 | tv - claudia - brito - vieira - portal transparência | 17 | 221_tv_claudia_brito_vieira |
| 222 | punição - sindicâncias - possível enviar - item item - série histórica | 16 | 222_punição_sindicâncias_possível enviar_item item |
| 223 | ufs - intuito - grato atenção - expediente - capa capa | 16 | 223_ufs_intuito_grato atenção_expediente |
| 224 | rj - 2010 - envolvidos - integral documentos - listados abaixo | 16 | 224_rj_2010_envolvidos_integral documentos |
| 225 | ufrj - filosofia - pós graduação - junho 2013 - nomeação | 16 | 225_ufrj_filosofia_pós graduação_junho 2013 |
| 226 | dpf - desfavor - cor - rj - instauração | 16 | 226_dpf_desfavor_cor_rj |
| 227 | docente - relacionados processo - admissão - intuito - grato atenção | 16 | 227_docente_relacionados processo_admissão_intuito |
| 228 | anexo solicitação - segue - empreendimentos - autuação - deferido | 16 | 228_anexo solicitação_segue_empreendimentos_autuação |
| 229 | segue anexo - segue - requerimento - anexo - requerimento anexo | 16 | 229_segue anexo_segue_requerimento_anexo |
| 230 | publicidade eficiência - poderes união - legalidade impessoalidade - direta indireta - têm direito | 16 | 230_publicidade eficiência_poderes união_legalidade impessoalidade_direta indireta |
| 231 | ref - janeiro 2015 - pagamentos - colegiado - setembro 2020 | 16 | 231_ref_janeiro 2015_pagamentos_colegiado |
| 232 | cronograma - financeira - construção - plataforma - seguintes informações | 16 | 232_cronograma_financeira_construção_plataforma |
| 233 | ata reunião - colegiado - desejo - 30 04 - duarte | 16 | 233_ata reunião_colegiado_desejo_30 04 |
| 234 | machado - vieira - 12 12 - embaixada - artigos 10 | 16 | 234_machado_vieira_12 12_embaixada |
| 235 | 32 - programa - 10 10 - resíduos - prevenção | 16 | 235_32_programa_10 10_resíduos |
| 236 | pagos - portal transparência - respectivos valores - conste - exercícios anteriores | 15 | 236_pagos_portal transparência_respectivos valores_conste |
| 237 | informações solicitadas - condutas ilícitas - público militar - parágrafo 1º - termos retardar | 15 | 237_informações solicitadas_condutas ilícitas_público militar_parágrafo 1º |
| 238 | serpro - receita federal - rfb - evidências - arquivos | 15 | 238_serpro_receita federal_rfb_evidências |
| 239 | araujo - oliveira - souza - principalmente - alves | 15 | 239_araujo_oliveira_souza_principalmente |
| 240 | rondônia - almeida - universidade federal - impostos - ademais | 15 | 240_rondônia_almeida_universidade federal_impostos |
| 241 | empenho - notas fiscais - ltda cnpj - publicações - 0001 | 15 | 241_empenho_notas fiscais_ltda cnpj_publicações |
| 242 | alimentos - campanhas - mapa - percentual - longo prazo | 15 | 242_alimentos_campanhas_mapa_percentual |
| 243 | pad - aposentado - mte - invalidez - minas gerais | 15 | 243_pad_aposentado_mte_invalidez |
| 244 | gostaríamos - receber cópia - gerente - anexos - ltda | 15 | 244_gostaríamos_receber cópia_gerente_anexos |
| 245 | juros - correspondentes - procuração - total - bancária | 15 | 245_juros_correspondentes_procuração_total |
| 246 | sindicâncias - número data - outro órgão - independente - agradeço atenção | 15 | 246_sindicâncias_número data_outro órgão_independente |
| 247 | reitor - aguardo - anuais - 2010 2011 - anualmente | 15 | 247_reitor_aguardo_anuais_2010 2011 |
| 248 | belém - menezes - julho 2017 - filosofia - marques | 15 | 248_belém_menezes_julho 2017_filosofia |
| 249 | brasileira correios - notas fiscais - companhia - requer informação - agosto 2019 | 14 | 249_brasileira correios_notas fiscais_companhia_requer informação |
| 250 | deputado federal - ibama - verde - multas aplicadas - autos infração | 14 | 250_deputado federal_ibama_verde_multas aplicadas |
| 251 | gentileza informar - reg - empresa - 96 - composição | 14 | 251_gentileza informar_reg_empresa_96 |
| 252 | caixa econômica - fernandes - documentos comprobatórios - subsídios - amparo | 14 | 252_caixa econômica_fernandes_documentos comprobatórios_subsídios |
| 253 | banca - professora - resultado final - marinha - unb | 14 | 253_banca_professora_resultado final_marinha |
| 254 | viagem - gru - encaminhar cópia - aprovada - processo referente | 14 | 254_viagem_gru_encaminhar cópia_aprovada |
| 255 | prestam serviços - ministério relações - direta indiretamente - olá boa - relações exteriores | 14 | 255_prestam serviços_ministério relações_direta indiretamente_olá boa |
| 256 | faltas - nomes completos - ocupam - setores - junho 2021 | 14 | 256_faltas_nomes completos_ocupam_setores |
| 257 | incra - regional - flávio - irregularidades - 26 | 14 | 257_incra_regional_flávio_irregularidades |
| 258 | pdf - conselho administração - conselheiro - xlsx - convocação | 14 | 258_pdf_conselho administração_conselheiro_xlsx |
| 259 | curto - consequências - presidência república - gastos públicos - impõe | 14 | 259_curto_consequências_presidência república_gastos públicos |
| 260 | petrobras - cópia contrato - aditivos - obter - assinado | 14 | 260_petrobras_cópia contrato_aditivos_obter |
| 261 | questionário - abertos - universidades federais - projetos - tese | 13 | 261_questionário_abertos_universidades federais_projetos |
| 262 | informar número - reg - mapa - petróleo - 400 | 13 | 262_informar número_reg_mapa_petróleo |
| 263 | sp - comando aeronáutica - 07 2012 - normas - permanência | 13 | 263_sp_comando aeronáutica_07 2012_normas |
| 264 | dedicação exclusiva - reitor - atualmente - universidade - docentes | 13 | 264_dedicação exclusiva_reitor_atualmente_universidade |
| 265 | mpog - cadastrados - exercícios anteriores - públicos federais - siape | 13 | 265_mpog_cadastrados_exercícios anteriores_públicos federais |
| 266 | amazonas - hospitais - janeiro 2021 - covid - indígenas | 13 | 266_amazonas_hospitais_janeiro 2021_covid |
| 267 | setembro 2020 - portaria nº - conteúdo - pareceres técnicos - agosto | 13 | 267_setembro 2020_portaria nº_conteúdo_pareceres técnicos |
| 268 | manda - www1 folha - discriminada - reconhecido - extras | 13 | 268_manda_www1 folha_discriminada_reconhecido |
| 269 | respondida - pais - termos retardar - deliberadamente fornecimento - incorreta incompleta | 13 | 269_respondida_pais_termos retardar_deliberadamente fornecimento |
| 270 | cartão - mês mês - extrato - planilha - secretaria geral | 13 | 270_cartão_mês mês_extrato_planilha |
| 271 | enem - alunos - questões - indique - podemos | 13 | 271_enem_alunos_questões_indique |
| 272 | segue anexo - segue - seguem - anexo solicitação - anexos | 13 | 272_segue anexo_segue_seguem_anexo solicitação |
| 273 | salarial - bruto - atual - repartição - suspensão | 13 | 273_salarial_bruto_atual_repartição |
| 274 | carga horária - departamento - mar - unifesp - cópia decisão | 12 | 274_carga horária_departamento_mar_unifesp |
| 275 | meios - projetos - órgão entidade - consulado - procedimentos | 12 | 275_meios_projetos_órgão entidade_consulado |
| 276 | edital - finanças - provas - concurso público - infraestrutura | 12 | 276_edital_finanças_provas_concurso público |
| 277 | procedimentos administrativos - situações - aeronaves - brasileiras - federal ministério | 12 | 277_procedimentos administrativos_situações_aeronaves_brasileiras |
| 278 | física - livro - seletivo - grato atenção - cumprimentos | 12 | 278_física_livro_seletivo_grato atenção |
| 279 | xlsx - acórdão - aberto csv - csv ods - download | 12 | 279_xlsx_acórdão_aberto csv_csv ods |
| 280 | acusado - disciplinares - preferência - informações prestadas - relatório final | 12 | 280_acusado_disciplinares_preferência_informações prestadas |
| 281 | documento anexo - requerimento - documento - anexo - | 12 | 281_documento anexo_requerimento_documento_anexo |
| 282 | prestação contas - correios - cópias - posso - esporte | 12 | 282_prestação contas_correios_cópias_posso |
| 283 | pandemia - henrique - youtube - mão obra - corrupção | 12 | 283_pandemia_henrique_youtube_mão obra |
| 284 | arquivo anexo - vide - doc - anexo - arquivo | 12 | 284_arquivo anexo_vide_doc_anexo |
| 285 | ibama - requer informação - últimos 10 - mortes - ambientais | 12 | 285_ibama_requer informação_últimos 10_mortes |
| 286 | pensão - força - mulheres - peço lista - valor pago | 12 | 286_pensão_força_mulheres_peço lista |
| 287 | covid 19 - saude - hospitalar - vacinação - variáveis | 12 | 287_covid 19_saude_hospitalar_vacinação |
| 288 | setor - informações acerca - arquivo - questões - universitário | 12 | 288_setor_informações acerca_arquivo_questões |
| 289 | ilmo - gentilmente - enviada - abril 2013 - encaminhei | 12 | 289_ilmo_gentilmente_enviada_abril 2013 |
| 290 | infraero - encaminhar - pareceres - cópia documento - oliveira | 12 | 290_infraero_encaminhar_pareceres_cópia documento |
| 291 | metodologia - tabela anexo - 1º 12 - passagem - requeiro apontada | 12 | 291_metodologia_tabela anexo_1º 12_passagem |
| 292 | placa - denatran - amparado - planilha excel - cadastrados | 11 | 292_placa_denatran_amparado_planilha excel |
| 293 | informar quanto - ro - pagou - energia elétrica - roberto nascimento | 11 | 293_informar quanto_ro_pagou_energia elétrica |
| 294 | brasileira inteligência - desclassificados - documento anexo - abin - agência | 11 | 294_brasileira inteligência_desclassificados_documento anexo_abin |
| 295 | indígenas - campo - emergência - beneficiário - 2019 data | 11 | 295_indígenas_campo_emergência_beneficiário |
| 296 | medicina - sisu - modalidade - 2023 - quantas vagas | 11 | 296_medicina_sisu_modalidade_2023 |
| 297 | mpog - públicos federais - cadastrados - exercícios anteriores - devidos | 11 | 297_mpog_públicos federais_cadastrados_exercícios anteriores |
| 298 | infraero - deixo - 03 2018 - continua - cumprimentos | 11 | 298_infraero_deixo_03 2018_continua |
| 299 | aquisição - engenharia - vigor - cópia digitalizada - insumos | 11 | 299_aquisição_engenharia_vigor_cópia digitalizada |
| 300 | correios - nascimento silva - existem - mortes - 835 818 | 11 | 300_correios_nascimento silva_existem_mortes |
| 301 | relatório - anac - via mail - gru - encaminhar cópia | 11 | 301_relatório_anac_via mail_gru |
| 302 | ebserh - fundamentação legal - nº 04 - posse - julho 2021 | 11 | 302_ebserh_fundamentação legal_nº 04_posse |
| 303 | escola - naval - receitas - tenente - caminho | 11 | 303_escola_naval_receitas_tenente |
| 304 | classificação - grau sigilo - processo sei - envio documentos - decreto nº | 10 | 304_classificação_grau sigilo_processo sei_envio documentos |
| 305 | contabilidade - ativos - atualizado - superintendências - técnico administrativo | 10 | 305_contabilidade_ativos_atualizado_superintendências |
| 306 | marinha - cabo - copias - comandante - sociedade | 10 | 306_marinha_cabo_copias_comandante |
| 307 | ilmo - eleitoral - pauta - ata reunião - 020 | 10 | 307_ilmo_eleitoral_pauta_ata reunião |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
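For reference, these settings map onto the `BERTopic` constructor roughly as follows (a sketch reconstructed from the values above; the embedding model and training data are not recorded in this card):
```python
from bertopic import BERTopic

# Reconstruction of the recorded configuration (embedding model unknown)
topic_model = BERTopic(
    calculate_probabilities=False,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
```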
## Framework versions
* Numpy: 1.26.1
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.1.2
* Scikit-Learn: 1.3.2
* Sentence-transformers: 2.2.2
* Transformers: 4.35.0
* Numba: 0.58.1
* Plotly: 5.18.0
* Python: 3.10.13 | null | bertopic | text-classification | null | null | null | null | null | null | null | null | null | rodrigomoreirasilva/bertopic_lai_recursos_CGU | [
-0.7740206122398376,
-0.49392104148864746,
0.40655210614204407,
0.3451929986476898,
-0.4532264173030853,
0.13708850741386414,
0.14170095324516296,
-0.24023184180259705,
0.6529358625411987,
0.3033316731452942,
-0.6129136681556702,
-0.6309354305267334,
-0.6526185274124146,
0.2647341787815094... |
DAfromsky/Multi-Label-Classification-PubMed-Articles | DAfromsky | 2023-11-29T18:00:03Z | 6 | 0 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T18:00:03Z | 2023-11-29T16:00:35.000Z | null | null | Entry not found | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | DAfromsky/Multi-Label-Classification-PubMed-Articles | [
-0.32276463508605957,
-0.2256849706172943,
0.8622266054153442,
0.4346153736114502,
-0.5282987952232361,
0.7012974619865417,
0.7915722131729126,
0.07618652284145355,
0.7746030688285828,
0.2563217282295227,
-0.7852814793586731,
-0.22573867440223694,
-0.9104479551315308,
0.571567177772522,
... |
erolb/bart_test | erolb | 2023-11-29T16:56:50Z | 6 | 0 | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:Mousumi/finetuned_bart",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T16:56:50Z | 2023-11-29T16:56:47.000Z | null | null | ---
base_model: Mousumi/finetuned_bart
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bart_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_test
This model is a fine-tuned version of [Mousumi/finetuned_bart](https://huggingface.co/Mousumi/finetuned_bart) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3363
- Bleu: 0.2728
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
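For orientation, these values map onto 🤗 `Seq2SeqTrainingArguments` roughly as follows (a sketch; the output directory and the `predict_with_generate` flag are assumptions, not recorded in this card):
```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the recorded hyperparameters; the Adam betas/epsilon above are the library defaults
args = Seq2SeqTrainingArguments(
    output_dir="bart_test",        # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    predict_with_generate=True,    # assumption: needed to compute BLEU and Gen Len
)
```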
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 313 | 1.3791 | 0.1806 | 20.0 |
| 1.6735 | 2.0 | 626 | 1.3363 | 0.2728 | 20.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | erolb/bart_test | [
-0.6174452304840088,
-0.9588227868080139,
0.21889632940292358,
0.2550455629825592,
-0.3629378080368042,
-0.3771335482597351,
-0.2204234004020691,
-0.24373821914196014,
0.2096989005804062,
0.4531290829181671,
-0.7532472610473633,
-0.5119777321815491,
-0.5991396307945251,
-0.0791577845811843... |
thelfer/test-RL-moonlander | thelfer | 2023-11-29T20:01:03Z | 6 | 0 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T20:01:03Z | 2023-11-29T19:24:26.000Z | null | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.12 +/- 18.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO policy
checkpoint = load_from_hub("thelfer/test-RL-moonlander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | thelfer/test-RL-moonlander | [
-0.0031747242901474237,
-0.3944118320941925,
0.24817679822444916,
0.3390541076660156,
-0.08787582069635391,
0.04007984697818756,
0.5000530481338501,
-0.1760784089565277,
0.28882232308387756,
0.9444825649261475,
-0.6269250512123108,
-0.5120341181755066,
-0.4980955719947815,
-0.2793834805488... |
yentinglin/Taiwan-LLaMa-v0.9 | yentinglin | 2023-11-29T06:01:03Z | 5 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"dataset:yentinglin/zh_TW_c4",
"dataset:yentinglin/traditional_mandarin_instructions",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T06:01:03Z | 2023-08-10T05:30:47.000Z | null | null | ---
license: llama2
datasets:
- yentinglin/zh_TW_c4
- yentinglin/traditional_mandarin_instructions
language:
- zh
widget:
- text: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
library_name: transformers
pipeline_tag: text-generation
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 13B v0.9 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).
## Model description
- **Model type:** A 13B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLaMa-v1.0-base](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0-base)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance

## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLaMa-v0.9", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Training hyperparameters



The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
## Citation
If you find Taiwan LLM useful in your work, please cite it with:
```
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Language Models for Taiwanese Culture},
year={2023},
url={https://github.com/MiuLab/Taiwan-LLaMa},
note={Code and models available at https://github.com/MiuLab/Taiwan-LLaMa},
}
```
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | yentinglin/Taiwan-LLaMa-v0.9 | [
-0.39330315589904785,
-0.9659410715103149,
0.327315092086792,
0.4937893748283386,
-0.4964509606361389,
0.10414669662714005,
-0.44251424074172974,
-0.5707306861877441,
0.46413636207580566,
0.38383620977401733,
-0.4680643677711487,
-0.6901674866676331,
-0.5529568195343018,
0.1389815658330917... |
mdosama39/xlm-roberta-base-FakeNews-Dravidian-NT | mdosama39 | 2023-11-29T13:42:04Z | 5 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-11-29T13:42:04Z | 2023-11-23T18:05:32.000Z | null | null | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-FakeNews-Dravidian-NT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-FakeNews-Dravidian-NT
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4065
- Accuracy: 0.8221
- Weighted f1 score: 0.8220
- Macro f1 score: 0.8220
## Model description
More information needed
## Intended uses & limitations
More information needed
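Until this section is filled in, a minimal inference sketch (the example sentence is a placeholder; label names depend on the model's config):
```python
from transformers import pipeline

# Load the fine-tuned fake-news classifier directly from the Hub
classifier = pipeline("text-classification", model="mdosama39/xlm-roberta-base-FakeNews-Dravidian-NT")
print(classifier("Enter a news headline or claim to classify here."))
```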
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 score | Macro f1 score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------------:|
| 0.8524 | 1.0 | 204 | 0.6614 | 0.6282 | 0.6093 | 0.6096 |
| 0.6323 | 2.0 | 408 | 0.4988 | 0.7767 | 0.7734 | 0.7733 |
| 0.5365 | 3.0 | 612 | 0.4496 | 0.7939 | 0.7930 | 0.7930 |
| 0.493 | 4.0 | 816 | 0.4303 | 0.8074 | 0.8063 | 0.8062 |
| 0.4644 | 5.0 | 1020 | 0.4150 | 0.8098 | 0.8096 | 0.8096 |
| 0.4397 | 6.0 | 1224 | 0.4065 | 0.8221 | 0.8217 | 0.8217 |
| 0.4251 | 7.0 | 1428 | 0.4063 | 0.8209 | 0.8205 | 0.8205 |
| 0.4224 | 8.0 | 1632 | 0.4058 | 0.8245 | 0.8241 | 0.8240 |
| 0.415 | 9.0 | 1836 | 0.4063 | 0.8245 | 0.8242 | 0.8242 |
| 0.4039 | 10.0 | 2040 | 0.4065 | 0.8221 | 0.8220 | 0.8220 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.14.1
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | mdosama39/xlm-roberta-base-FakeNews-Dravidian-NT | [
-0.5583587288856506,
-0.6563931107521057,
0.2367294728755951,
0.029267050325870514,
-0.23493774235248566,
-0.15686222910881042,
-0.03273026645183563,
-0.19929125905036926,
0.3592929244041443,
0.44428256154060364,
-0.8213578462600708,
-0.783229649066925,
-0.7857844233512878,
-0.103641055524... |
Jaspernl/whisper-small-student-ft-nl | Jaspernl | 2023-11-29T19:27:25Z | 5 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | 2023-11-29T19:27:25Z | 2023-11-28T10:45:42.000Z | null | null | Entry not found | null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | Jaspernl/whisper-small-student-ft-nl | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
topeomole/mistral-fin | topeomole | 2023-11-29T12:09:18Z | 5 | 0 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T12:09:18Z | 2023-11-28T20:45:40.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | topeomole/mistral-fin | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
Miraitowa1829/raner_ecommerce | Miraitowa1829 | 2023-11-29T03:46:21Z | 5 | 0 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T03:46:21Z | 2023-11-29T03:00:50.000Z | null | null | ---
license: apache-2.0
---
| null | transformers | fill-mask | null | null | null | null | null | null | null | null | null | Miraitowa1829/raner_ecommerce | [
-0.12853386998176575,
-0.18616794049739838,
0.6529127359390259,
0.4943622946739197,
-0.19319306313991547,
0.2360745519399643,
0.36072012782096863,
0.05056336894631386,
0.579365611076355,
0.740013837814331,
-0.6508102416992188,
-0.23784014582633972,
-0.7102251052856445,
-0.04782590642571449... |
hanchungshin/opt-6.7b-lora | hanchungshin | 2023-11-29T03:52:40Z | 5 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:facebook/opt-6.7b",
"region:us"
] | 2023-11-29T03:52:40Z | 2023-11-29T03:48:10.000Z | null | null | ---
library_name: peft
base_model: facebook/opt-6.7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
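In lieu of the missing snippet, a minimal loading sketch (assumes the adapter sits on top of `facebook/opt-6.7b`, as the card metadata records, with the 8-bit setup listed under "Training procedure" below):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model in 8-bit (matching the recorded bitsandbytes config), then attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "hanchungshin/opt-6.7b-lora")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
```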
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | hanchungshin/opt-6.7b-lora | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
Ash-Hun/WelSSiSKo-Chat | Ash-Hun | 2023-11-29T04:19:06Z | 5 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"region:us"
] | 2023-11-29T04:19:06Z | 2023-11-29T04:19:00.000Z | null | null | ---
library_name: peft
base_model: beomi/polyglot-ko-12.8b-safetensors
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
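Expressed with the `transformers` API, the quantization settings above correspond roughly to the following (a sketch of the recorded values, not taken from the original training code):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, as recorded above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```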
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | Ash-Hun/WelSSiSKo-Chat | [
-0.5779396295547485,
-0.5580515265464783,
0.40497368574142456,
0.08317576348781586,
-0.253414124250412,
-0.27545133233070374,
0.06068450212478638,
-0.5384040474891663,
0.04877224564552307,
0.6135933995246887,
-0.7259423136711121,
-0.6298723816871643,
-0.5585345029830933,
-0.079713866114616... |
mutisya/whisper-large-v3-sw-cv-11-v23.11.28 | mutisya | 2023-11-29T19:38:36Z | 5 | 0 | null | [
"peft",
"tensorboard",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"region:us"
] | 2023-11-29T19:38:36Z | 2023-11-29T06:05:13.000Z | null | null | ---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
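In lieu of the missing snippet, a minimal loading sketch (assumes the adapter attaches to `openai/whisper-large-v3`, as the card metadata records, using the 8-bit config listed under "Training procedure" below):
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the 8-bit base Whisper model, then attach the fine-tuned adapter
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "mutisya/whisper-large-v3-sw-cv-11-v23.11.28")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```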
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | mutisya/whisper-large-v3-sw-cv-11-v23.11.28 | [
-0.5748044848442078,
-0.559001624584198,
0.40296831727027893,
0.07961396127939224,
-0.25349289178848267,
-0.27700257301330566,
0.0604688934981823,
-0.5367453098297119,
0.04952647536993027,
0.6133860945701599,
-0.7236800193786621,
-0.627833366394043,
-0.5595568418502808,
-0.0856231749057769... |
roymgabriel/BioPharma | roymgabriel | 2023-11-29T07:49:48Z | 5 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T07:49:48Z | 2023-11-29T06:32:19.000Z | null | null | ---
base_model: dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: BioPharma
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioPharma
This model is a fine-tuned version of [dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0](https://huggingface.co/dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4073
- F1: 0.8642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | roymgabriel/BioPharma | [
-0.2607462704181671,
-0.4989706873893738,
0.4084590971469879,
-0.13215292990207672,
-0.336279958486557,
-0.2691322863101959,
-0.06143391504883766,
-0.09619822353124619,
0.24965760111808777,
0.3649883270263672,
-0.7445471286773682,
-0.4769435226917267,
-0.7468767166137695,
0.061075363308191... |
rika37/poca-SoccerTwos | rika37 | 2023-11-29T07:20:28Z | 5 | 0 | null | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | 2023-11-29T07:20:28Z | 2023-11-29T07:20:20.000Z | null | null | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rika37/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| null | ml-agents | reinforcement-learning | null | null | null | null | null | null | null | null | null | rika37/poca-SoccerTwos | [
-0.6720775961875916,
-0.6790478229522705,
0.20810623466968536,
0.15002183616161346,
-0.2125471532344818,
0.325041264295578,
0.14344240725040436,
-0.38320308923721313,
0.6785479784011841,
0.3021409213542938,
-0.8300206065177917,
-0.8170238137245178,
-0.36572131514549255,
-0.2477235794067382... |
Liberty-L/swag_pretrained | Liberty-L | 2023-11-29T07:47:03Z | 5 | 0 | null | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"endpoints_compatible",
"region:us"
] | 2023-11-29T07:47:03Z | 2023-11-29T07:46:23.000Z | null | null | Entry not found | null | transformers | multiple-choice | null | null | null | null | null | null | null | null | null | Liberty-L/swag_pretrained | [
-0.3227648138999939,
-0.22568483650684357,
0.8622256517410278,
0.43461519479751587,
-0.5282990336418152,
0.7012965679168701,
0.7915716767311096,
0.07618631422519684,
0.7746025323867798,
0.25632259249687195,
-0.7852814793586731,
-0.22573857009410858,
-0.910447895526886,
0.5715669393539429,
... |
HelixAI/codellama-8bit-json-prompt-new-prompt-1129-1500-no_chat_history_epoch_3 | HelixAI | 2023-11-29T09:00:35Z | 5 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T09:00:35Z | 2023-11-29T08:27:02.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | HelixAI/codellama-8bit-json-prompt-new-prompt-1129-1500-no_chat_history_epoch_3 | [
-0.3227648138999939,
-0.22568483650684357,
0.8622256517410278,
0.43461519479751587,
-0.5282990336418152,
0.7012965679168701,
0.7915716767311096,
0.07618631422519684,
0.7746025323867798,
0.25632259249687195,
-0.7852814793586731,
-0.22573857009410858,
-0.910447895526886,
0.5715669393539429,
... |
dawoz/videomae-base-finetuned-driver-action | dawoz | 2023-11-29T17:11:49Z | 5 | 0 | null | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:11:49Z | 2023-11-29T09:10:50.000Z | null | null | Entry not found | null | transformers | video-classification | null | null | null | null | null | null | null | null | null | dawoz/videomae-base-finetuned-driver-action | [
-0.3227648138999939,
-0.22568483650684357,
0.8622256517410278,
0.43461519479751587,
-0.5282990336418152,
0.7012965679168701,
0.7915716767311096,
0.07618631422519684,
0.7746025323867798,
0.25632259249687195,
-0.7852814793586731,
-0.22573857009410858,
-0.910447895526886,
0.5715669393539429,
... |
En-2863/opt-125m | En-2863 | 2023-11-29T10:03:24Z | 5 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T10:03:24Z | 2023-11-29T10:02:56.000Z | null | null | ---
tags:
- generated_from_trainer
model-index:
- name: opt-125m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2192 | 1.0 | 1184 | 3.1833 |
| 3.024 | 2.0 | 2368 | 3.1701 |
| 2.9101 | 3.0 | 3552 | 3.1709 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | En-2863/opt-125m | [
-0.4472428262233734,
-0.5417060256004333,
0.2874981164932251,
0.050810642540454865,
-0.4117540121078491,
-0.5451977849006653,
-0.018422160297632217,
-0.07369659096002579,
0.1358080357313156,
0.45131734013557434,
-0.8529940247535706,
-0.6889107823371887,
-0.5587742924690247,
-0.096198759973... |
xriminact/tars_v2_lesson_plan_1k | xriminact | 2023-11-29T10:53:07Z | 5 | 0 | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T10:53:07Z | 2023-11-29T10:46:16.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | xriminact/tars_v2_lesson_plan_1k | [
-0.3227650225162506,
-0.22568444907665253,
0.8622258901596069,
0.43461504578590393,
-0.5282988548278809,
0.7012965679168701,
0.7915717959403992,
0.0761863961815834,
0.7746025919914246,
0.2563222050666809,
-0.7852813005447388,
-0.22573848068714142,
-0.910447895526886,
0.5715667009353638,
... |
EvgeniaKomleva/roberta-large-finetuned-abbr-finetuned-ner | EvgeniaKomleva | 2023-11-29T16:46:22Z | 5 | 0 | null | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:plod-filtered",
"base_model:surrey-nlp/roberta-large-finetuned-abbr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T16:46:22Z | 2023-11-29T13:07:34.000Z | null | null | ---
license: mit
base_model: surrey-nlp/roberta-large-finetuned-abbr
tags:
- generated_from_trainer
datasets:
- plod-filtered
model-index:
- name: roberta-large-finetuned-abbr-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-abbr-finetuned-ner
This model is a fine-tuned version of [surrey-nlp/roberta-large-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) on the plod-filtered dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0988
- eval_precision: 0.9704
- eval_recall: 0.9689
- eval_f1: 0.9697
- eval_accuracy: 0.9665
- eval_runtime: 204.5482
- eval_samples_per_second: 118.016
- eval_steps_per_second: 29.504
- epoch: 2.72
- step: 76484
## Model description
More information needed
## Intended uses & limitations
More information needed
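Until the card is completed, a minimal inference sketch (the example sentence is a placeholder):
```python
from transformers import pipeline

# Token-classification pipeline; the base model targets abbreviation detection (PLOD)
ner = pipeline(
    "token-classification",
    model="EvgeniaKomleva/roberta-large-finetuned-abbr-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("The World Health Organization (WHO) publishes guidelines."))
```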
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | token-classification | null | null | null | null | null | null | null | null | null | EvgeniaKomleva/roberta-large-finetuned-abbr-finetuned-ner | [
-0.5537661910057068,
-0.8839789628982544,
0.17182938754558563,
0.19058945775032043,
-0.4520288407802582,
-0.5426505208015442,
-0.4311613440513611,
-0.36864110827445984,
0.15597453713417053,
0.4568801820278168,
-0.6659648418426514,
-0.6375438570976257,
-0.6581913232803345,
0.029153127223253... |
xiaopch/vit-base-patch16-224-finetuned | xiaopch | 2023-11-29T14:26:13Z | 5 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T14:26:13Z | 2023-11-29T14:12:13.000Z | null | null | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9809264305177112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0786
- Accuracy: 0.9809
## Model description
More information needed
## Intended uses & limitations
More information needed
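Until the card is completed, a minimal inference sketch (the image path is a placeholder; class labels come from the training image folder):
```python
from transformers import pipeline

# Image-classification pipeline using the fine-tuned checkpoint
classifier = pipeline("image-classification", model="xiaopch/vit-base-patch16-224-finetuned")
print(classifier("path/to/image.jpg"))
```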
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5888 | 1.0 | 26 | 0.1720 | 0.9700 |
| 0.1027 | 2.0 | 52 | 0.0786 | 0.9809 |
| 0.0809 | 3.0 | 78 | 0.0730 | 0.9809 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | image-classification | null | null | null | null | null | null | null | null | null | xiaopch/vit-base-patch16-224-finetuned | [
-0.4308100938796997,
-0.6706395149230957,
0.0266280435025692,
0.11614043265581131,
-0.42177948355674744,
-0.49037012457847595,
-0.11826643347740173,
-0.16255904734134674,
0.05557970330119133,
0.36175453662872314,
-0.753279447555542,
-0.6443222761154175,
-0.7091612815856934,
-0.245579287409... |
BojanaBas/Mistral-7B-Instruct-v0.1-pqa | BojanaBas | 2023-11-29T15:24:02Z | 5 | 0 | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T15:24:02Z | 2023-11-29T14:20:14.000Z | null | null | ---
license: agpl-3.0
---
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | BojanaBas/Mistral-7B-Instruct-v0.1-pqa | [
-0.12853401899337769,
-0.1861673891544342,
0.6529126763343811,
0.4943625330924988,
-0.19319301843643188,
0.23607464134693146,
0.3607196807861328,
0.05056333541870117,
0.5793654322624207,
0.740013837814331,
-0.6508100628852844,
-0.23783957958221436,
-0.7102248668670654,
-0.04782595857977867... |
pnikoulis/dqn-SpaceInvadersNoFrameskip-v4 | pnikoulis | 2023-11-29T15:03:35Z | 5 | 0 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T15:03:35Z | 2023-11-29T15:03:04.000Z | null | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 232.00 +/- 130.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pnikoulis -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pnikoulis -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pnikoulis
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | pnikoulis/dqn-SpaceInvadersNoFrameskip-v4 | [
-0.6189193725585938,
-0.5514317750930786,
0.28288692235946655,
0.355608731508255,
-0.1659119576215744,
-0.24718502163887024,
0.14372500777244568,
-0.18031561374664307,
0.17743748426437378,
0.31289929151535034,
-1.02152681350708,
-0.4915480315685272,
-0.3524612784385681,
-0.0515349060297012... |
GouldJayden/dqn-SpaceInvadersNoFrameskip-v4 | GouldJayden | 2023-11-29T15:51:50Z | 5 | 0 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T15:51:50Z | 2023-11-29T15:51:20.000Z | null | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 483.50 +/- 70.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga GouldJayden -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga GouldJayden -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga GouldJayden
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | GouldJayden/dqn-SpaceInvadersNoFrameskip-v4 | [
-0.6115051507949829,
-0.5522042512893677,
0.2883049547672272,
0.3662659823894501,
-0.14626669883728027,
-0.2555489242076874,
0.1476164013147354,
-0.18685027956962585,
0.17817707359790802,
0.31665104627609253,
-1.009177565574646,
-0.49054184556007385,
-0.35433393716812134,
-0.05227923020720... |
wesley7137/Guanaco-3B-Uncensored-v2-sharded | wesley7137 | 2023-11-29T16:41:28Z | 5 | 0 | null | [
"transformers",
"safetensors",
"gpt_neox",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T16:41:28Z | 2023-11-29T16:37:32.000Z | null | null | Entry not found | null | transformers | feature-extraction | null | null | null | null | null | null | null | null | null | wesley7137/Guanaco-3B-Uncensored-v2-sharded | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
RajuEEE/GeneratorModel_SFT_GPT2Large_SmallerQuestion | RajuEEE | 2023-11-29T19:18:46Z | 5 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:gpt2-large",
"region:us"
] | 2023-11-29T19:18:46Z | 2023-11-29T19:18:39.000Z | null | null | ---
library_name: peft
base_model: gpt2-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
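The list above maps one-to-one onto a `transformers` `BitsAndBytesConfig`; as a hedged loading sketch (not from the original card; the base and adapter ids are taken from this repo's metadata):
```python
# Hedged sketch: express the 8-bit config above in code, then attach the adapter.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,       # load_in_8bit: True
    llm_int8_threshold=6.0,  # llm_int8_threshold: 6.0
)
base = AutoModelForCausalLM.from_pretrained(
    "gpt2-large", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "RajuEEE/GeneratorModel_SFT_GPT2Large_SmallerQuestion")
```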
### Framework versions
- PEFT 0.6.3.dev0 | null | peft | null | null | null | null | null | null | null | null | null | null | RajuEEE/GeneratorModel_SFT_GPT2Large_SmallerQuestion | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
ukr-models/lb-1 | ukr-models | 2023-11-29T20:17:02Z | 5 | 0 | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | 2023-11-29T20:17:02Z | 2023-11-29T20:15:59.000Z | null | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ukr-models/lb-1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ukr-models/lb-1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ukr-models/lb-1')
model = AutoModel.from_pretrained('ukr-models/lb-1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ukr-models/lb-1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 113 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 113,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
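For concreteness, the parameters above assemble into a `fit()` call like the hedged reconstruction below; the actual triplet training data is not published, so `train_examples` is a placeholder:
```python
# Hedged reconstruction of the training setup from the parameters above.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("ukr-models/lb-1")
train_examples = [InputExample(texts=["anchor", "positive", "negative"])]  # placeholder data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,  # scheduler defaults to WarmupLinear
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```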
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | null | sentence-transformers | sentence-similarity | null | null | null | null | null | null | null | null | null | ukr-models/lb-1 | [
-0.25452929735183716,
-0.8519259691238403,
0.3290368318557739,
0.3326404392719269,
-0.26051825284957886,
-0.4348751902580261,
-0.24282489717006683,
-0.002975932089611888,
0.2143375724554062,
0.3701936900615692,
-0.6482661366462708,
-0.6470916867256165,
-0.6898119449615479,
-0.0279571823775... |
User1115/whisper-large-v2-test-singleWord-small-10steps | User1115 | 2023-11-29T20:46:55Z | 5 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"region:us"
] | 2023-11-29T20:46:55Z | 2023-11-29T20:46:47.000Z | null | null | ---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
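As a hedged usage sketch (not part of the original card), the adapter can be attached to the base model named in this repo's metadata with `peft`:
```python
# Hedged sketch: attach this PEFT adapter to the base Whisper checkpoint.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "User1115/whisper-large-v2-test-singleWord-small-10steps")
```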
### Framework versions
- PEFT 0.6.3.dev0 | null | peft | null | null | null | null | null | null | null | null | null | null | User1115/whisper-large-v2-test-singleWord-small-10steps | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
mamedu2016/llama-2-7b-miniguanaco | mamedu2016 | 2023-11-29T08:41:38Z | 4 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T08:41:38Z | 2023-07-29T17:46:46.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | mamedu2016/llama-2-7b-miniguanaco | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
youssed/llm-hub | youssed | 2023-11-29T20:36:50Z | 4 | 0 | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T20:36:50Z | 2023-10-24T14:13:24.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | youssed/llm-hub | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
JBZhang2342/speecht5_tts | JBZhang2342 | 2023-11-29T18:29:47Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"en_accent,mozilla,t5,common_voice_1_0",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_1_0",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-11-29T18:29:47Z | 2023-11-10T22:43:04.000Z | null | null | ---
language:
- en
license: mit
base_model: microsoft/speecht5_tts
tags:
- en_accent,mozilla,t5,common_voice_1_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_1_0
model-index:
- name: SpeechT5 TTS English Accented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS English Accented
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Common Voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4937
## Model description
More information needed
## Intended uses & limitations
More information needed
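In the absence of an official snippet, a minimal hedged inference sketch follows. SpeechT5 conditions on a 512-dim speaker x-vector; the zero vector below is a placeholder only, and a real x-vector (e.g., from a speaker-verification model) should be used in practice:
```python
# Hedged sketch: text-to-speech with this checkpoint plus the standard HiFi-GAN vocoder.
# If the fine-tuned repo lacks processor files, fall back to microsoft/speecht5_tts.
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("JBZhang2342/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("JBZhang2342/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```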
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.01 | 250 | 0.6600 |
| 0.8131 | 0.01 | 500 | 0.6203 |
| 0.8131 | 0.02 | 750 | 0.5819 |
| 0.6493 | 0.03 | 1000 | 0.5436 |
| 0.6493 | 0.03 | 1250 | 0.5372 |
| 0.5998 | 0.04 | 1500 | 0.5411 |
| 0.5998 | 0.04 | 1750 | 0.5351 |
| 0.585 | 0.05 | 2000 | 0.5260 |
| 0.585 | 0.06 | 2250 | 0.5254 |
| 0.5778 | 0.06 | 2500 | 0.5217 |
| 0.5778 | 0.07 | 2750 | 0.5229 |
| 0.5667 | 0.07 | 3000 | 0.5115 |
| 0.5667 | 0.08 | 3250 | 0.5143 |
| 0.5692 | 0.09 | 3500 | 0.5143 |
| 0.5692 | 0.09 | 3750 | 0.5130 |
| 0.5607 | 0.1 | 4000 | 0.5082 |
| 0.5607 | 0.11 | 4250 | 0.5141 |
| 0.5601 | 0.11 | 4500 | 0.5103 |
| 0.5601 | 0.12 | 4750 | 0.5065 |
| 0.5569 | 0.12 | 5000 | 0.5048 |
| 0.5569 | 0.13 | 5250 | 0.5018 |
| 0.5552 | 0.14 | 5500 | 0.5006 |
| 0.5552 | 0.14 | 5750 | 0.5074 |
| 0.5548 | 0.15 | 6000 | 0.5045 |
| 0.5548 | 0.16 | 6250 | 0.5021 |
| 0.5563 | 0.16 | 6500 | 0.4996 |
| 0.5563 | 0.17 | 6750 | 0.4982 |
| 0.5527 | 0.17 | 7000 | 0.4982 |
| 0.5527 | 0.18 | 7250 | 0.4996 |
| 0.5448 | 0.19 | 7500 | 0.5005 |
| 0.5448 | 0.19 | 7750 | 0.4988 |
| 0.542 | 0.2 | 8000 | 0.4973 |
| 0.542 | 0.21 | 8250 | 0.4969 |
| 0.5491 | 0.21 | 8500 | 0.4985 |
| 0.5491 | 0.22 | 8750 | 0.4953 |
| 0.5392 | 0.23 | 9000 | 0.4956 |
| 0.5392 | 0.23 | 9250 | 0.4988 |
| 0.546 | 0.24 | 9500 | 0.4982 |
| 0.546 | 0.24 | 9750 | 0.4968 |
| 0.5396 | 0.25 | 10000 | 0.4937 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| null | transformers | text-to-audio | null | null | null | null | null | null | null | null | null | JBZhang2342/speecht5_tts | [
-0.6141244769096375,
-0.5739700794219971,
0.09131790697574615,
0.10779629647731781,
-0.0936756357550621,
0.0515502505004406,
-0.0006025308975949883,
-0.15767578780651093,
0.5897188186645508,
0.3731901943683624,
-0.7796216607093811,
-0.8322483897209167,
-0.6848599910736084,
-0.1418609321117... |
phqlong/vietcuna-7b-v3-qlora-gpt4augmented-combined-v2 | phqlong | 2023-11-29T07:52:30Z | 4 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:vilm/vietcuna-7b-v3",
"region:us"
] | 2023-11-29T07:52:30Z | 2023-11-14T08:17:41.000Z | null | null | ---
library_name: peft
base_model: vilm/vietcuna-7b-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a QLoRA-style loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
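Expressed as code (a hedged sketch, not from the original card), this is the standard QLoRA-style 4-bit load for the base and adapter ids in this repo's metadata:
```python
# Hedged sketch: 4-bit NF4 load matching the config above, then attach the adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "vilm/vietcuna-7b-v3", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "phqlong/vietcuna-7b-v3-qlora-gpt4augmented-combined-v2")
```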
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | phqlong/vietcuna-7b-v3-qlora-gpt4augmented-combined-v2 | [
-0.5767228603363037,
-0.5574303269386292,
0.403776079416275,
0.08205638080835342,
-0.2533367872238159,
-0.2763859033584595,
0.06185923516750336,
-0.5377660393714905,
0.051452722400426865,
0.6138508319854736,
-0.7261555194854736,
-0.6295592188835144,
-0.5589078664779663,
-0.0806203186511993... |
pranjal01/fine_tuned_gpt2_clm-model | pranjal01 | 2023-11-29T08:39:09Z | 4 | 0 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"en",
"dataset:eli5",
"base_model:gpt2",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T08:39:09Z | 2023-11-22T07:04:36.000Z | null | null | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: fine_tuned_gpt2_clm-model
results: []
datasets:
- eli5
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_gpt2_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3066
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 142 | 3.3422 |
| No log | 2.0 | 284 | 3.3226 |
| No log | 3.0 | 426 | 3.3148 |
| 3.4352 | 4.0 | 568 | 3.3095 |
| 3.4352 | 5.0 | 710 | 3.3074 |
| 3.4352 | 6.0 | 852 | 3.3066 |
| 3.4352 | 7.0 | 994 | 3.3046 |
| 3.3068 | 8.0 | 1136 | 3.3049 |
| 3.3068 | 9.0 | 1278 | 3.3048 |
| 3.3068 | 10.0 | 1420 | 3.3050 |
| 3.2433 | 11.0 | 1562 | 3.3062 |
| 3.2433 | 12.0 | 1704 | 3.3059 |
| 3.2433 | 13.0 | 1846 | 3.3062 |
| 3.2433 | 14.0 | 1988 | 3.3065 |
| 3.2113 | 15.0 | 2130 | 3.3066 |
### Inference:
- Prompt: "dna phosphorylation is the process of"
- Generated text: dna phosphorylation is the process of forming the deoxygenated product. For example, in a protein phosphorylation inhibitor, it occurs to deoxygenate the phosphorylated protein by binding a phosphate molecule and preventing it from being destroyed by a nonenzymatic process.
In a phosphorylation inhibitor like dna, the product is phosphorylated by the phosphocreatine, a phosphorylated phosphocreatine molecule that can bind to other phosphocreatine molecules that bind to phosphocreatine. This interaction helps to separate the phosphocreatine molecule that is phosphorylated from the phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine-glucose molecule that is phosphocreatine-phosphocreatine-glucose-phosphocreatine-phosphocreatine-glucose.
In anoxidase inhibitors like dna, they are a bit more specific, more specific, and have a more complicated interaction with the phosphocreatine molecule that can bind to phosphocreatine molecules.
I would argue that both dna-and phosphocreatine-phosphocreatine-glucose will not be able to bind to phosphocreatine because the phosphocreatine-phosphocreatine-phosphocreatine-glucose-phosphocreatine molecule that was phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine-glucose, is phosphocreatine.
That is, dna-and phosphocreatine-glucose will be able to bind to phosphocreatine because the phosphocreatine molecule that was phosphocreatine-glucose will not be phosphocreatine because the phosphocreatine-phosphocreatine-glucose molecule that was phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine-glucose, is phosphocreatine.
Edit: Added: The final point is that it can't bind phosphocreatine because that phosphocreatine molecule (a phosphocreatine-phosphocreatine-phosphocreatine-phosphocreatine molecule) can not be phosphoc
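The sample above can be reproduced (up to sampling noise) with the standard `transformers` pipeline; a minimal hedged sketch, with generation parameters chosen for illustration:
```python
# Hedged sketch: sampled continuation of the prompt shown above.
from transformers import pipeline

generator = pipeline("text-generation", model="pranjal01/fine_tuned_gpt2_clm-model")
out = generator(
    "dna phosphorylation is the process of",
    max_new_tokens=100,  # illustrative; the card does not state the settings used
    do_sample=True,
    top_k=50,
)
print(out[0]["generated_text"])
```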
### Evaluation metric:
Perplexity: 27.29 (consistent with exp(3.3066) ≈ 27.3, the exponential of the evaluation loss above)
### GPU:
- CUDA Version: 12.1
- 4x Tesla T4
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | pranjal01/fine_tuned_gpt2_clm-model | [
-0.42180901765823364,
-0.7413365840911865,
0.5670709013938904,
-0.21192890405654907,
-0.2473188042640686,
0.04258708283305168,
0.11293305456638336,
-0.19492369890213013,
0.008632907643914223,
0.09097157418727875,
-0.5567428469657898,
-0.30489546060562134,
-0.8391161561012268,
0.03205564990... |
finiteautomata/roberta-base-bne-reranker | finiteautomata | 2023-11-29T14:09:30Z | 4 | 0 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T14:09:30Z | 2023-11-23T16:34:19.000Z | null | null | ---
{}
---
# Reranker with RoBERTa (roberta-base-bne)
## Metrics
| Metric | Value |
| ------ | ----- |
| MRR | 0.680 |
| MRR Grouped | 0.723 |
| Accuracy | 0.608 |
| Accuracy Grouped | 0.656 | | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | finiteautomata/roberta-base-bne-reranker | [
0.27848705649375916,
-0.3747776448726654,
0.23210997879505157,
0.4209558367729187,
-0.2575225234031677,
0.27309784293174744,
-0.057767901569604874,
0.05318247899413109,
0.8043569922447205,
-0.10131067782640457,
-0.19232186675071716,
-0.8626659512519836,
-1.2346445322036743,
0.0593600124120... |
MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33 | MoritzLaurer | 2023-11-29T19:21:06Z | 4 | 0 | null | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-11-29T19:21:06Z | 2023-11-23T22:22:02.000Z | null | null | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
pipeline_tag: zero-shot-classification
library_name: transformers
license: mit
---
# Model description: deberta-v3-base-zeroshot-v1.1-all-33
The model is designed for zero-shot classification with the Hugging Face pipeline.
The model can do one universal classification task: determine whether a hypothesis is "true" or "not true" given a text
(`entailment` vs. `not_entailment`).
This task format is based on the Natural Language Inference task (NLI).
The format is so universal that any classification task can be reformulated into it.
A detailed description of how the model was trained and how it can be used is available in this paper: [link to be added]
## Training data
The model was trained on a mixture of __33 datasets and 387 classes__ that have been reformatted into this universal format.
1. Five NLI datasets with ~885k texts: "mnli", "anli", "fever", "wanli", "ling"
2. 28 classification tasks reformatted into the universal NLI format. ~51k cleaned texts were used to avoid overfitting:
'amazonpolarity', 'imdb', 'appreviews', 'yelpreviews', 'rottentomatoes',
'emotiondair', 'emocontext', 'empathetic',
'financialphrasebank', 'banking77', 'massive',
'wikitoxic_toxicaggregated', 'wikitoxic_obscene', 'wikitoxic_threat', 'wikitoxic_insult', 'wikitoxic_identityhate',
'hateoffensive', 'hatexplain', 'biasframes_offensive', 'biasframes_sex', 'biasframes_intent',
'agnews', 'yahootopics',
'trueteacher', 'spam', 'wellformedquery',
'manifesto', 'capsotu'.
See details on each dataset here: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv
Note that compared to other NLI models, this model predicts two classes (`entailment` vs. `not_entailment`)
as opposed to three classes (entailment/neutral/contradiction)
The model was only trained on English data. For __multilingual use-cases__,
I recommend machine translating texts to English with libraries like [EasyNMT](https://github.com/UKPLab/EasyNMT).
English-only models tend to perform better than multilingual models and
validation with English data can be easier if you don't speak all languages in your corpus.
### How to use the model
#### Simple zero-shot classification pipeline
```python
#!pip install transformers[sentencepiece]
from transformers import pipeline
text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis_template = "This example is about {}"
classes_verbalized = ["politics", "economy", "entertainment", "environment"]
zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33")
output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False)
print(output)
```
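Equivalently, a single premise–hypothesis pair can be scored without the pipeline; a hedged sketch that reads the label order from the model config rather than assuming it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "This example is about politics"
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
# label order (entailment vs. not_entailment) comes from the model config
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```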
### Details on data and training
The code for preparing the data and training & evaluating the model is fully open-source here: https://github.com/MoritzLaurer/zeroshot-classifier/tree/main
Hyperparameters and other details are available in this Weights & Biases repo: https://wandb.ai/moritzlaurer/deberta-v3-base-zeroshot-v1-1-all-33/table?workspace=user-
## Metrics
Balanced accuracy is reported for all datasets.
`deberta-v3-base-zeroshot-v1.1-all-33` was trained on all datasets, with at most 500 texts per class to avoid overfitting.
The metrics on these datasets are therefore not strictly zeroshot, as the model has seen some data for each task during training.
`deberta-v3-base-zeroshot-v1.1-heldout` indicates zeroshot performance on the respective dataset.
To calculate these zeroshot metrics, the pipeline was run 28 times, each time with one dataset held out from training to simulate a zeroshot setup.

| | deberta-v3-base-mnli-fever-anli-ling-wanli-binary | deberta-v3-base-zeroshot-v1.1-heldout | deberta-v3-base-zeroshot-v1.1-all-33 |
|:---------------------------|---------------------------:|----------------------------------------:|---------------------------------------:|
| datasets mean (w/o nli) | 62 | 70.7 | 84 |
| amazonpolarity (2) | 91.7 | 95.7 | 96 |
| imdb (2) | 87.3 | 93.6 | 94.5 |
| appreviews (2) | 91.3 | 92.2 | 94.4 |
| yelpreviews (2) | 95.1 | 97.4 | 98.3 |
| rottentomatoes (2) | 83 | 88.7 | 90.8 |
| emotiondair (6) | 46.5 | 42.6 | 74.5 |
| emocontext (4) | 58.5 | 57.4 | 81.2 |
| empathetic (32) | 31.3 | 37.3 | 52.7 |
| financialphrasebank (3) | 78.3 | 68.9 | 91.2 |
| banking77 (72) | 18.9 | 46 | 73.7 |
| massive (59) | 44 | 56.6 | 78.9 |
| wikitoxic_toxicaggreg (2) | 73.7 | 82.5 | 90.5 |
| wikitoxic_obscene (2) | 77.3 | 91.6 | 92.6 |
| wikitoxic_threat (2) | 83.5 | 95.2 | 96.7 |
| wikitoxic_insult (2) | 79.6 | 91 | 91.6 |
| wikitoxic_identityhate (2) | 83.9 | 88 | 94.4 |
| hateoffensive (3) | 55.2 | 66.1 | 86 |
| hatexplain (3) | 44.1 | 57.6 | 76.9 |
| biasframes_offensive (2) | 56.8 | 85.4 | 87 |
| biasframes_sex (2) | 85.4 | 87 | 91.8 |
| biasframes_intent (2) | 56.3 | 85.2 | 87.8 |
| agnews (4) | 77.3 | 80 | 90.5 |
| yahootopics (10) | 53.6 | 57.7 | 72.8 |
| trueteacher (2) | 51.4 | 49.5 | 82.4 |
| spam (2) | 51.8 | 50 | 97.2 |
| wellformedquery (2) | 49.9 | 52.5 | 77.2 |
| manifesto (56) | 5.8 | 18.9 | 39.1 |
| capsotu (21) | 25.2 | 64 | 72.5 |
| mnli_m (2) | 92.4 | nan | 92.7 |
| mnli_mm (2) | 92.4 | nan | 92.5 |
| fevernli (2) | 89 | nan | 89.1 |
| anli_r1 (2) | 79.4 | nan | 80 |
| anli_r2 (2) | 68.4 | nan | 68.4 |
| anli_r3 (2) | 66.2 | nan | 68 |
| wanli (2) | 81.6 | nan | 81.8 |
| lingnli (2) | 88.4 | nan | 88.4 |
## Limitations and bias
The model can only do text classification tasks.
Please consult the original DeBERTa paper and the papers for the different datasets for potential biases.
## License
The base model (DeBERTa-v3) is published under the MIT license.
The datasets the model was fine-tuned on are published under a diverse set of licenses.
The following table provides an overview of the non-NLI datasets used for fine-tuning,
information on licenses, the underlying papers etc.: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv
## Citation
If you use this model academically, please cite:
```
@article{laurer_less_2023,
title = {Less {Annotating}, {More} {Classifying}: {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT}-{NLI}},
issn = {1047-1987, 1476-4989},
shorttitle = {Less {Annotating}, {More} {Classifying}},
url = {https://www.cambridge.org/core/product/identifier/S1047198723000207/type/journal_article},
doi = {10.1017/pan.2023.20},
language = {en},
urldate = {2023-06-20},
journal = {Political Analysis},
author = {Laurer, Moritz and Van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper},
month = jun,
year = {2023},
pages = {1--33},
}
```
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers can have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
### Hypotheses used for classification
The hypotheses in the tables below were used to fine-tune the model.
Inspecting them can help users get a feeling for which type of hypotheses and tasks the model was trained on.
You can formulate your own hypotheses by changing the `hypothesis_template` of the zeroshot pipeline. For example:
```python
from transformers import pipeline
text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis_template = "Merkel is the leader of the party: {}"
classes_verbalized = ["CDU", "SPD", "Greens"]
zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33")
output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False)
print(output)
```
Note that a few rows in the `massive` and `banking77` datasets contain `nan` because some classes were so ambiguous/unclear that I excluded them from the data.
#### wellformedquery
| label | hypothesis |
|:----------------|:-----------------------------------------------|
| not_well_formed | This example is not a well formed Google query |
| well_formed | This example is a well formed Google query. |
#### biasframes_sex
| label | hypothesis |
|:--------|:-----------------------------------------------------------|
| not_sex | This example does not contain allusions to sexual content. |
| sex | This example contains allusions to sexual content. |
#### biasframes_intent
| label | hypothesis |
|:-----------|:-----------------------------------------------------------------|
| intent | The intent of this example is to be offensive/disrespectful. |
| not_intent | The intent of this example is not to be offensive/disrespectful. |
#### biasframes_offensive
| label | hypothesis |
|:--------------|:-------------------------------------------------------------------------|
| not_offensive | This example could not be considered offensive, disrespectful, or toxic. |
| offensive | This example could be considered offensive, disrespectful, or toxic. |
#### financialphrasebank
| label | hypothesis |
|:---------|:--------------------------------------------------------------------------|
| negative | The sentiment in this example is negative from an investor's perspective. |
| neutral | The sentiment in this example is neutral from an investor's perspective. |
| positive | The sentiment in this example is positive from an investor's perspective. |
#### rottentomatoes
| label | hypothesis |
|:---------|:-----------------------------------------------------------------------|
| negative | The sentiment in this example rotten tomatoes movie review is negative |
| positive | The sentiment in this example rotten tomatoes movie review is positive |
#### amazonpolarity
| label | hypothesis |
|:---------|:----------------------------------------------------------------|
| negative | The sentiment in this example amazon product review is negative |
| positive | The sentiment in this example amazon product review is positive |
#### imdb
| label | hypothesis |
|:---------|:------------------------------------------------------------|
| negative | The sentiment in this example imdb movie review is negative |
| positive | The sentiment in this example imdb movie review is positive |
#### appreviews
| label | hypothesis |
|:---------|:------------------------------------------------------|
| negative | The sentiment in this example app review is negative. |
| positive | The sentiment in this example app review is positive. |
#### yelpreviews
| label | hypothesis |
|:---------|:-------------------------------------------------------|
| negative | The sentiment in this example yelp review is negative. |
| positive | The sentiment in this example yelp review is positive. |
#### wikitoxic_toxicaggregated
| label | hypothesis |
|:--------------------|:----------------------------------------------------------------|
| not_toxicaggregated | This example wikipedia comment does not contain toxic language. |
| toxicaggregated | This example wikipedia comment contains toxic language. |
#### wikitoxic_obscene
| label | hypothesis |
|:------------|:------------------------------------------------------------------|
| not_obscene | This example wikipedia comment does not contain obscene language. |
| obscene | This example wikipedia comment contains obscene language. |
#### wikitoxic_threat
| label | hypothesis |
|:-----------|:----------------------------------------------------------|
| not_threat | This example wikipedia comment does not contain a threat. |
| threat | This example wikipedia comment contains a threat. |
#### wikitoxic_insult
| label | hypothesis |
|:-----------|:-----------------------------------------------------------|
| insult | This example wikipedia comment contains an insult. |
| not_insult | This example wikipedia comment does not contain an insult. |
#### wikitoxic_identityhate
| label | hypothesis |
|:-----------------|:---------------------------------------------------------------|
| identityhate | This example wikipedia comment contains identity hate. |
| not_identityhate | This example wikipedia comment does not contain identity hate. |
#### hateoffensive
| label | hypothesis |
|:------------|:------------------------------------------------------------------------|
| hate_speech | This example tweet contains hate speech. |
| neither | This example tweet contains neither offensive language nor hate speech. |
| offensive | This example tweet contains offensive language without hate speech. |
#### hatexplain
| label | hypothesis |
|:------------|:-------------------------------------------------------------------------------------------|
| hate_speech | This example text from twitter or gab contains hate speech. |
| neither | This example text from twitter or gab contains neither offensive language nor hate speech. |
| offensive | This example text from twitter or gab contains offensive language without hate speech. |
#### spam
| label | hypothesis |
|:---------|:------------------------------|
| not_spam | This example sms is not spam. |
| spam | This example sms is spam. |
#### emotiondair
| label | hypothesis |
|:---------|:---------------------------------------------------|
| anger | This example tweet expresses the emotion: anger |
| fear | This example tweet expresses the emotion: fear |
| joy | This example tweet expresses the emotion: joy |
| love | This example tweet expresses the emotion: love |
| sadness | This example tweet expresses the emotion: sadness |
| surprise | This example tweet expresses the emotion: surprise |
#### emocontext
| label | hypothesis |
|:--------|:--------------------------------------------------------------------------------------|
| angry | This example tweet expresses the emotion: anger |
| happy | This example tweet expresses the emotion: happiness |
| others | This example tweet does not express any of the emotions: anger, sadness, or happiness |
| sad | This example tweet expresses the emotion: sadness |
#### empathetic
| label | hypothesis |
|:-------------|:-----------------------------------------------------------|
| afraid | The main emotion of this example dialogue is: afraid |
| angry | The main emotion of this example dialogue is: angry |
| annoyed | The main emotion of this example dialogue is: annoyed |
| anticipating | The main emotion of this example dialogue is: anticipating |
| anxious | The main emotion of this example dialogue is: anxious |
| apprehensive | The main emotion of this example dialogue is: apprehensive |
| ashamed | The main emotion of this example dialogue is: ashamed |
| caring | The main emotion of this example dialogue is: caring |
| confident | The main emotion of this example dialogue is: confident |
| content | The main emotion of this example dialogue is: content |
| devastated | The main emotion of this example dialogue is: devastated |
| disappointed | The main emotion of this example dialogue is: disappointed |
| disgusted | The main emotion of this example dialogue is: disgusted |
| embarrassed | The main emotion of this example dialogue is: embarrassed |
| excited | The main emotion of this example dialogue is: excited |
| faithful | The main emotion of this example dialogue is: faithful |
| furious | The main emotion of this example dialogue is: furious |
| grateful | The main emotion of this example dialogue is: grateful |
| guilty | The main emotion of this example dialogue is: guilty |
| hopeful | The main emotion of this example dialogue is: hopeful |
| impressed | The main emotion of this example dialogue is: impressed |
| jealous | The main emotion of this example dialogue is: jealous |
| joyful | The main emotion of this example dialogue is: joyful |
| lonely | The main emotion of this example dialogue is: lonely |
| nostalgic | The main emotion of this example dialogue is: nostalgic |
| prepared | The main emotion of this example dialogue is: prepared |
| proud | The main emotion of this example dialogue is: proud |
| sad | The main emotion of this example dialogue is: sad |
| sentimental | The main emotion of this example dialogue is: sentimental |
| surprised | The main emotion of this example dialogue is: surprised |
| terrified | The main emotion of this example dialogue is: terrified |
| trusting | The main emotion of this example dialogue is: trusting |
#### agnews
| label | hypothesis |
|:---------|:-------------------------------------------------------|
| Business | This example news text is about business news |
| Sci/Tech | This example news text is about science and technology |
| Sports | This example news text is about sports |
| World | This example news text is about world news |
#### yahootopics
| label | hypothesis |
|:-----------------------|:---------------------------------------------------------------------------------------------------|
| Business & Finance | This example question from the Yahoo Q&A forum is categorized in the topic: Business & Finance |
| Computers & Internet | This example question from the Yahoo Q&A forum is categorized in the topic: Computers & Internet |
| Education & Reference | This example question from the Yahoo Q&A forum is categorized in the topic: Education & Reference |
| Entertainment & Music | This example question from the Yahoo Q&A forum is categorized in the topic: Entertainment & Music |
| Family & Relationships | This example question from the Yahoo Q&A forum is categorized in the topic: Family & Relationships |
| Health | This example question from the Yahoo Q&A forum is categorized in the topic: Health |
| Politics & Government | This example question from the Yahoo Q&A forum is categorized in the topic: Politics & Government |
| Science & Mathematics | This example question from the Yahoo Q&A forum is categorized in the topic: Science & Mathematics |
| Society & Culture | This example question from the Yahoo Q&A forum is categorized in the topic: Society & Culture |
| Sports | This example question from the Yahoo Q&A forum is categorized in the topic: Sports |
#### massive
| label | hypothesis |
|:-------------------------|:------------------------------------------------------------------------------------------|
| alarm_query | The example utterance is a query about alarms. |
| alarm_remove | The intent of this example utterance is to remove an alarm. |
| alarm_set | The intent of the example utterance is to set an alarm. |
| audio_volume_down | The intent of the example utterance is to lower the volume. |
| audio_volume_mute | The intent of this example utterance is to mute the volume. |
| audio_volume_other | The example utterance is related to audio volume. |
| audio_volume_up | The intent of this example utterance is turning the audio volume up. |
| calendar_query | The example utterance is a query about a calendar. |
| calendar_remove | The intent of the example utterance is to remove something from a calendar. |
| calendar_set | The intent of this example utterance is to set something in a calendar. |
| cooking_query | The example utterance is a query about cooking. |
| cooking_recipe           | This example utterance is about cooking recipes.                                           |
| datetime_convert | The example utterance is related to date time changes or conversion. |
| datetime_query | The intent of this example utterance is a datetime query. |
| email_addcontact | The intent of this example utterance is adding an email address to contacts. |
| email_query | The example utterance is a query about emails. |
| email_querycontact | The intent of this example utterance is to query contact details. |
| email_sendemail | The intent of the example utterance is to send an email. |
| general_greet | This example utterance is a general greet. |
| general_joke | The intent of the example utterance is to hear a joke. |
| general_quirky | nan |
| iot_cleaning | The intent of the example utterance is for an IoT device to start cleaning. |
| iot_coffee | The intent of this example utterance is for an IoT device to make coffee. |
| iot_hue_lightchange | The intent of this example utterance is changing the light. |
| iot_hue_lightdim | The intent of the example utterance is to dim the lights. |
| iot_hue_lightoff | The example utterance is related to turning the lights off. |
| iot_hue_lighton | The example utterance is related to turning the lights on. |
| iot_hue_lightup | The intent of this example utterance is to brighten lights. |
| iot_wemo_off | The intent of this example utterance is turning an IoT device off. |
| iot_wemo_on | The intent of the example utterance is to turn an IoT device on. |
| lists_createoradd | The example utterance is related to creating or adding to lists. |
| lists_query | The example utterance is a query about a list. |
| lists_remove | The intent of this example utterance is to remove a list or remove something from a list. |
| music_dislikeness | The intent of this example utterance is signalling music dislike. |
| music_likeness | The example utterance is related to liking music. |
| music_query | The example utterance is a query about music. |
| music_settings | The intent of the example utterance is to change music settings. |
| news_query | The example utterance is a query about the news. |
| play_audiobook | The example utterance is related to playing audiobooks. |
| play_game | The intent of this example utterance is to start playing a game. |
| play_music | The intent of this example utterance is for an IoT device to play music. |
| play_podcasts | The example utterance is related to playing podcasts. |
| play_radio | The intent of the example utterance is to play something on the radio. |
| qa_currency              | This example utterance is about currencies.                                                |
| qa_definition | The example utterance is a query about a definition. |
| qa_factoid | The example utterance is a factoid question. |
| qa_maths | The example utterance is a question about maths. |
| qa_stock | This example utterance is about stocks. |
| recommendation_events | This example utterance is about event recommendations. |
| recommendation_locations | The intent of this example utterance is receiving recommendations for good locations. |
| recommendation_movies | This example utterance is about movie recommendations. |
| social_post | The example utterance is about social media posts. |
| social_query | The example utterance is a query about a social network. |
| takeaway_order | The intent of this example utterance is to order takeaway food. |
| takeaway_query | This example utterance is about takeaway food. |
| transport_query | The example utterance is a query about transport or travels. |
| transport_taxi | The intent of this example utterance is to get a taxi. |
| transport_ticket | This example utterance is about transport tickets. |
| transport_traffic | This example utterance is about transport or traffic. |
| weather_query            | This example utterance is a query about the weather.                                       |
#### banking77
| label | hypothesis |
|:-------------------------------------------------|:----------------------------------------------------------------------------------------------------------|
| Refund_not_showing_up | This customer example message is about a refund not showing up. |
| activate_my_card | This banking customer example message is about activating a card. |
| age_limit | This banking customer example message is related to age limits. |
| apple_pay_or_google_pay | This banking customer example message is about apple pay or google pay |
| atm_support | This banking customer example message requests ATM support. |
| automatic_top_up | This banking customer example message is about automatic top up. |
| balance_not_updated_after_bank_transfer | This banking customer example message is about a balance not updated after a transfer. |
| balance_not_updated_after_cheque_or_cash_deposit | This banking customer example message is about a balance not updated after a cheque or cash deposit. |
| beneficiary_not_allowed | This banking customer example message is related to a beneficiary not being allowed or a failed transfer. |
| cancel_transfer | This banking customer example message is related to the cancellation of a transfer. |
| card_about_to_expire | This banking customer example message is related to the expiration of a card. |
| card_acceptance | This banking customer example message is related to the scope of acceptance of a card. |
| card_arrival | This banking customer example message is about the arrival of a card. |
| card_delivery_estimate | This banking customer example message is about a card delivery estimate or timing. |
| card_linking | nan |
| card_not_working | This banking customer example message is about a card not working. |
| card_payment_fee_charged | This banking customer example message is about a card payment fee. |
| card_payment_not_recognised | This banking customer example message is about a payment the customer does not recognise. |
| card_payment_wrong_exchange_rate | This banking customer example message is about a wrong exchange rate. |
| card_swallowed | This banking customer example message is about a card swallowed by a machine. |
| cash_withdrawal_charge | This banking customer example message is about a cash withdrawal charge. |
| cash_withdrawal_not_recognised | This banking customer example message is about an unrecognised cash withdrawal. |
| change_pin | This banking customer example message is about changing a pin code. |
| compromised_card | This banking customer example message is about a compromised card. |
| contactless_not_working | This banking customer example message is about contactless not working |
| country_support | This banking customer example message is about country-specific support. |
| declined_card_payment | This banking customer example message is about a declined card payment. |
| declined_cash_withdrawal | This banking customer example message is about a declined cash withdrawal. |
| declined_transfer | This banking customer example message is about a declined transfer. |
| direct_debit_payment_not_recognised | This banking customer example message is about an unrecognised direct debit payment. |
| disposable_card_limits | This banking customer example message is about the limits of disposable cards. |
| edit_personal_details | This banking customer example message is about editing personal details. |
| exchange_charge | This banking customer example message is about exchange rate charges. |
| exchange_rate | This banking customer example message is about exchange rates. |
| exchange_via_app | nan |
| extra_charge_on_statement | This banking customer example message is about an extra charge. |
| failed_transfer | This banking customer example message is about a failed transfer. |
| fiat_currency_support | This banking customer example message is about fiat currency support |
| get_disposable_virtual_card | This banking customer example message is about getting a disposable virtual card. |
| get_physical_card | nan |
| getting_spare_card | This banking customer example message is about getting a spare card. |
| getting_virtual_card | This banking customer example message is about getting a virtual card. |
| lost_or_stolen_card | This banking customer example message is about a lost or stolen card. |
| lost_or_stolen_phone | This banking customer example message is about a lost or stolen phone. |
| order_physical_card | This banking customer example message is about ordering a card. |
| passcode_forgotten | This banking customer example message is about a forgotten passcode. |
| pending_card_payment | This banking customer example message is about a pending card payment. |
| pending_cash_withdrawal | This banking customer example message is about a pending cash withdrawal. |
| pending_top_up | This banking customer example message is about a pending top up. |
| pending_transfer | This banking customer example message is about a pending transfer. |
| pin_blocked | This banking customer example message is about a blocked pin. |
| receiving_money | This banking customer example message is about receiving money. |
| request_refund | This banking customer example message is about a refund request. |
| reverted_card_payment? | This banking customer example message is about reverting a card payment. |
| supported_cards_and_currencies | nan |
| terminate_account | This banking customer example message is about terminating an account. |
| top_up_by_bank_transfer_charge | nan |
| top_up_by_card_charge | This banking customer example message is about the charge for topping up by card. |
| top_up_by_cash_or_cheque | This banking customer example message is about topping up by cash or cheque. |
| top_up_failed | This banking customer example message is about top up issues or failures. |
| top_up_limits | This banking customer example message is about top up limitations. |
| top_up_reverted | This banking customer example message is about issues with topping up. |
| topping_up_by_card | This banking customer example message is about topping up by card. |
| transaction_charged_twice | This banking customer example message is about a transaction charged twice. |
| transfer_fee_charged | This banking customer example message is about an issue with a transfer fee charge. |
| transfer_into_account | This banking customer example message is about transfers into the customer's own account. |
| transfer_not_received_by_recipient | This banking customer example message is about a transfer that has not arrived yet. |
| transfer_timing | This banking customer example message is about transfer timing. |
| unable_to_verify_identity | This banking customer example message is about an issue with identity verification. |
| verify_my_identity | This banking customer example message is about identity verification. |
| verify_source_of_funds | This banking customer example message is about the source of funds. |
| verify_top_up | This banking customer example message is about verification and top ups |
| virtual_card_not_working | This banking customer example message is about a virtual card not working |
| visa_or_mastercard | This banking customer example message is about types of bank cards. |
| why_verify_identity | This banking customer example message questions why identity verification is necessary. |
| wrong_amount_of_cash_received | This banking customer example message is about a wrong amount of cash received. |
| wrong_exchange_rate_for_cash_withdrawal | This banking customer example message is about a wrong exchange rate for a cash withdrawal. |
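For concreteness, here is a minimal sketch of how these verbalised hypotheses plug into the standard `transformers` zero-shot pipeline; the input message and the label subset are illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33",
)

text = "I ordered my card two weeks ago and it still has not arrived."
# Pass the verbalised hypotheses directly and disable the default template with "{}".
hypotheses = [
    "This banking customer example message is about the arrival of a card.",
    "This banking customer example message is about a lost or stolen card.",
    "This banking customer example message is about a declined card payment.",
]
result = classifier(text, hypotheses, hypothesis_template="{}")
print(result["labels"][0], result["scores"][0])
```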
#### trueteacher
| label | hypothesis |
|:-----------------------|:---------------------------------------------------------------------|
| factually_consistent | The example summary is factually consistent with the full article. |
| factually_inconsistent | The example summary is factually inconsistent with the full article. |
#### capsotu
| label | hypothesis |
|:----------------------|:----------------------------------------------------------------------------------------------------------|
| Agriculture | This example text from a US presidential speech is about agriculture |
| Civil Rights | This example text from a US presidential speech is about civil rights or minorities or civil liberties |
| Culture | This example text from a US presidential speech is about cultural policy |
| Defense | This example text from a US presidential speech is about defense or military |
| Domestic Commerce | This example text from a US presidential speech is about banking or finance or commerce |
| Education | This example text from a US presidential speech is about education |
| Energy | This example text from a US presidential speech is about energy or electricity or fossil fuels |
| Environment | This example text from a US presidential speech is about the environment or water or waste or pollution |
| Foreign Trade | This example text from a US presidential speech is about foreign trade |
| Government Operations | This example text from a US presidential speech is about government operations or administration |
| Health | This example text from a US presidential speech is about health |
| Housing | This example text from a US presidential speech is about community development or housing issues |
| Immigration | This example text from a US presidential speech is about migration |
| International Affairs | This example text from a US presidential speech is about international affairs or foreign aid |
| Labor | This example text from a US presidential speech is about employment or labour |
| Law and Crime | This example text from a US presidential speech is about law, crime or family issues |
| Macroeconomics | This example text from a US presidential speech is about macroeconomics |
| Public Lands | This example text from a US presidential speech is about public lands or water management |
| Social Welfare | This example text from a US presidential speech is about social welfare |
| Technology | This example text from a US presidential speech is about space or science or technology or communications |
| Transportation | This example text from a US presidential speech is about transportation |
#### manifesto
| label | hypothesis |
|:-------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Agriculture and Farmers: Positive | This example text from a political party manifesto is positive towards policies for agriculture and farmers |
| Anti-Growth Economy: Positive | This example text from a political party manifesto is in favour of anti-growth politics |
| Anti-Imperialism | This example text from a political party manifesto is anti-imperialistic, for example against controlling other countries and for greater self-government of colonies |
| Centralisation | This example text from a political party manifesto is in favour of political centralisation |
| Civic Mindedness: Positive | This example text from a political party manifesto is positive towards national solidarity, civil society or appeals for public spiritedness or against anti-social attitudes |
| Constitutionalism: Negative | This example text from a political party manifesto is negative towards constitutionalism |
| Constitutionalism: Positive | This example text from a political party manifesto is positive towards constitutionalism and the status quo of the constitution |
| Controlled Economy | This example text from a political party manifesto is supportive of direct government control of the economy, e.g. price control or minimum wages |
| Corporatism/Mixed Economy | This example text from a political party manifesto is positive towards cooperation of government, employers, and trade unions simultaneously |
| Culture: Positive | This example text from a political party manifesto is in favour of cultural policies or leisure facilities, for example museums, libraries or public sport clubs |
| Decentralization | This example text from a political party manifesto is for decentralisation or federalism |
| Democracy | This example text from a political party manifesto favourably mentions democracy or democratic procedures or institutions |
| Economic Goals | This example text from a political party manifesto is a broad/general statement on economic goals without specifics |
| Economic Growth: Positive | This example text from a political party manifesto is supportive of economic growth, for example facilitation of more production or government aid for growth |
| Economic Orthodoxy | This example text from a political party manifesto is for economic orthodoxy, for example reduction of budget deficits, thrift or a strong currency |
| Economic Planning | This example text from a political party manifesto is positive towards government economic planning, e.g. policy plans or strategies |
| Education Expansion | This example text from a political party manifesto is about the need to expand/improve policy on education |
| Education Limitation | This example text from a political party manifesto is sceptical towards state expenditure on education, for example in favour of study fees or private schools |
| Environmental Protection | This example text from a political party manifesto is in favour of environmental protection, e.g. fighting climate change or 'green' policies or preservation of natural resources or animal rights |
| Equality: Positive | This example text from a political party manifesto is positive towards equality or social justice, e.g. protection of underprivileged groups or fair distribution of resources |
| European Community/Union: Negative | This example text from a political party manifesto negatively mentions the EU or European Community |
| European Community/Union: Positive | This example text from a political party manifesto is positive towards the EU or European Community, for example EU expansion and integration |
| Foreign Special Relationships: Negative | This example text from a political party manifesto is negative towards particular countries |
| Foreign Special Relationships: Positive | This example text from a political party manifesto is positive towards particular countries |
| Free Market Economy | This example text from a political party manifesto is in favour of a free market economy and capitalism |
| Freedom and Human Rights | This example text from a political party manifesto is in favour of freedom and human rights, for example freedom of speech, assembly or against state coercion or for individualism |
| Governmental and Administrative Efficiency | This example text from a political party manifesto is in favour of efficiency in government/administration, for example by restructuring civil service or improving bureaucracy |
| Incentives: Positive | This example text from a political party manifesto is favourable towards supply side economic policies supporting businesses, for example for incentives like subsidies or tax breaks |
| Internationalism: Negative | This example text from a political party manifesto is sceptical of internationalism, for example negative towards international cooperation, in favour of national sovereignty and unilateralism |
| Internationalism: Positive | This example text from a political party manifesto is in favour of international cooperation with other countries, for example mentions the need for aid to developing countries, or global governance |
| Keynesian Demand Management | This example text from a political party manifesto is for Keynesian demand management and demand-side economic policies |
| Labour Groups: Negative | This example text from a political party manifesto is negative towards labour groups and unions |
| Labour Groups: Positive | This example text from a political party manifesto is positive towards labour groups, for example for good working conditions, fair wages or unions |
| Law and Order: Positive | This example text from a political party manifesto is positive towards law and order and strict law enforcement |
| Market Regulation | This example text from a political party manifesto supports market regulation for a fair and open market, for example for consumer protection or for increased competition or for social market economy |
| Marxist Analysis | This example text from a political party manifesto is positive towards Marxist-Leninist ideas or uses specific Marxist terminology |
| Middle Class and Professional Groups | This example text from a political party manifesto favourably references the middle class, e.g. white collar groups or the service sector |
| Military: Negative | This example text from a political party manifesto is negative towards the military, for example for decreasing military spending or disarmament |
| Military: Positive | This example text from a political party manifesto is positive towards the military, for example for military spending or rearmament or military treaty obligations |
| Multiculturalism: Negative | This example text from a political party manifesto is sceptical towards multiculturalism, or for cultural integration or appeals to cultural homogeneity in society |
| Multiculturalism: Positive | This example text from a political party manifesto favourably mentions cultural diversity, for example for freedom of religion or linguistic heritages |
| National Way of Life: Negative | This example text from a political party manifesto unfavourably mentions a country's nation and history, for example sceptical towards patriotism or national pride |
| National Way of Life: Positive | This example text from a political party manifesto is positive towards the national way of life and history, for example pride of citizenship or appeals to patriotism |
| Nationalisation | This example text from a political party manifesto is positive towards government ownership of industries or land or for economic nationalisation |
| Non-economic Demographic Groups | This example text from a political party manifesto favourably mentions non-economic demographic groups like women, students or specific age groups |
| Peace | This example text from a political party manifesto is positive towards peace and peaceful means of solving crises, for example in favour of negotiations and ending wars |
| Political Authority | This example text from a political party manifesto mentions the speaker's competence to govern or other party's lack of such competence, or favourably mentions a strong/stable government |
| Political Corruption | This example text from a political party manifesto is negative towards political corruption or abuse of political/bureaucratic power |
| Protectionism: Negative | This example text from a political party manifesto is negative towards protectionism, in favour of free trade |
| Protectionism: Positive | This example text from a political party manifesto is in favour of protectionism, for example tariffs, export subsidies |
| Technology and Infrastructure: Positive | This example text from a political party manifesto is about technology and infrastructure, e.g. the importance of modernisation of industry, or supportive of public spending on infrastructure/tech |
| Traditional Morality: Negative | This example text from a political party manifesto is negative towards traditional morality, for example against religious moral values, for divorce or abortion, for modern families or separation of church and state |
| Traditional Morality: Positive | This example text from a political party manifesto is favourable towards traditional or religious values, for example for censorship of immoral behaviour, for traditional family values or religious institutions |
| Underprivileged Minority Groups | This example text from a political party manifesto favourably mentions underprivileged minorities, for example handicapped, homosexuals or immigrants |
| Welfare State Expansion | This example text from a political party manifesto is positive towards the welfare state, e.g. health care, pensions or social housing |
| Welfare State Limitation | This example text from a political party manifesto is for limiting the welfare state, for example public funding for social services or social security, e.g. private care before state care | | null | transformers | zero-shot-classification | null | null | null | null | null | null | null | null | null | MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33 | [
-0.30118751525878906,
-0.6360240578651428,
0.39659368991851807,
0.0016846314538270235,
-0.14944541454315186,
-0.04272819310426712,
0.11359844356775284,
-0.48929017782211304,
0.3401908576488495,
0.23426924645900726,
-0.4957200884819031,
-0.80556720495224,
-0.837732195854187,
0.0216158069670... |
vinayvemuri/faq_qa_model | vinayvemuri | 2023-11-29T23:45:38Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T23:45:38Z | 2023-11-26T19:12:06.000Z | null | null | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: faq_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# faq_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 4.9320 |
| No log | 2.0 | 12 | 3.9422 |
| No log | 3.0 | 18 | 3.2712 |
| No log | 4.0 | 24 | 3.0726 |
| No log | 5.0 | 30 | 2.9938 |
| No log | 6.0 | 36 | 3.1028 |
| No log | 7.0 | 42 | 2.8811 |
| No log | 8.0 | 48 | 3.2465 |
| No log | 9.0 | 54 | 3.3097 |
| No log | 10.0 | 60 | 3.2337 |
| No log | 11.0 | 66 | 3.3950 |
| No log | 12.0 | 72 | 3.3698 |
| No log | 13.0 | 78 | 3.3528 |
| No log | 14.0 | 84 | 3.4233 |
| No log | 15.0 | 90 | 3.4698 |
| No log | 16.0 | 96 | 3.4321 |
| No log | 17.0 | 102 | 3.4550 |
| No log | 18.0 | 108 | 3.5227 |
| No log | 19.0 | 114 | 3.5257 |
| No log | 20.0 | 120 | 3.5354 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
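A minimal usage sketch for the fine-tuned model; the question and context are illustrative:

```python
from transformers import pipeline

# Extractive QA: the answer is a span copied out of the supplied context.
qa = pipeline("question-answering", model="vinayvemuri/faq_qa_model")
result = qa(
    question="What is the return window?",
    context="Items can be returned within 30 days of delivery for a full refund.",
)
print(result["answer"], result["score"])
```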
| null | transformers | question-answering | null | null | null | null | null | null | null | null | null | vinayvemuri/faq_qa_model | [
-0.5342686772346497,
-0.682330846786499,
0.14910916984081268,
0.17190203070640564,
-0.21903473138809204,
-0.20698459446430206,
0.052675943821668625,
-0.0907558724284172,
0.22965717315673828,
0.3068501651287079,
-0.8141646981239319,
-0.7449055314064026,
-0.8061203360557556,
-0.1862512826919... |
ShynBui/my_awesome_model | ShynBui | 2023-11-29T13:35:38Z | 4 | 0 | null | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:vinai/phobert-base-v2",
"endpoints_compatible",
"region:us"
] | 2023-11-29T13:35:38Z | 2023-11-27T18:36:35.000Z | null | null | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_keras_callback
model-index:
- name: ShynBui/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ShynBui/my_awesome_model
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0041
- Validation Loss: 0.0044
- Train Accuracy: 0.9984
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20544, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
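The optimizer entry above is a serialised Keras config; as a sketch, the same schedule and optimizer can be reconstructed directly (values copied from the config):

```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 20544 steps, as in the config above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=20544,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```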
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0408 | 0.0113 | 0.9961 | 0 |
| 0.0113 | 0.0124 | 0.9965 | 1 |
| 0.0041 | 0.0044 | 0.9984 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | ShynBui/my_awesome_model | [
-0.5690155625343323,
-0.525709331035614,
0.26395362615585327,
0.08570858836174011,
-0.4549550414085388,
-0.4426042437553406,
-0.0781465470790863,
-0.2957792580127716,
0.12143663316965103,
0.09539668262004852,
-0.5454360842704773,
-0.5949760675430298,
-0.6339471340179443,
-0.309933900833129... |
SharatChandra/whisper-fine-banking-dataset | SharatChandra | 2023-11-29T05:19:21Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | 2023-11-29T05:19:21Z | 2023-11-28T08:00:18.000Z | null | null | ---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-fine-banking-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-fine-banking-dataset
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6014
- Wer: 96.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0001 | 32.26 | 1000 | 0.5225 | 96.6399 |
| 0.0 | 64.52 | 2000 | 0.5617 | 96.7495 |
| 0.0 | 96.77 | 3000 | 0.5931 | 96.7495 |
| 0.0 | 129.03 | 4000 | 0.6014 | 96.7495 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
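A minimal transcription sketch; the audio file path is a placeholder:

```python
from transformers import pipeline

# Long-form audio is chunked automatically by the pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="SharatChandra/whisper-fine-banking-dataset",
    chunk_length_s=30,
)
print(asr("sample_call.wav")["text"])
```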
| null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | SharatChandra/whisper-fine-banking-dataset | [
-0.35039022564888,
-0.712043285369873,
-0.012375878170132637,
0.07974682748317719,
-0.37765464186668396,
-0.46125108003616333,
-0.10507832467556,
-0.2854855954647064,
0.04852442443370819,
0.5919066667556763,
-0.6922072172164917,
-0.7089511752128601,
-0.7525351047515869,
-0.3677444756031036... |
Liberty-L/Multiple_Choice_swag | Liberty-L | 2023-11-29T11:46:27Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:Liberty-L/Multiple_Choice_swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T11:46:27Z | 2023-11-28T14:33:28.000Z | null | null | ---
license: apache-2.0
base_model: Liberty-L/Multiple_Choice_swag
tags:
- generated_from_trainer
model-index:
- name: Multiple_Choice_swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multiple_Choice_swag
This model is a fine-tuned version of [Liberty-L/Multiple_Choice_swag](https://huggingface.co/Liberty-L/Multiple_Choice_swag) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
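A minimal inference sketch for a SWAG-style multiple-choice model; the prompt and candidate endings are illustrative:

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo = "Liberty-L/Multiple_Choice_swag"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

prompt = "She opens the fridge and"
endings = [
    "takes out a bottle of milk.",
    "drives to the airport.",
    "starts singing opera.",
]

# Each candidate is encoded as a (context, ending) pair and scored jointly.
inputs = tokenizer([prompt] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}  # (batch=1, num_choices, seq_len)
logits = model(**inputs).logits                          # (batch=1, num_choices)
print(endings[int(torch.argmax(logits, dim=-1))])
```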
| null | transformers | multiple-choice | null | null | null | null | null | null | null | null | null | Liberty-L/Multiple_Choice_swag | [
-0.6341099739074707,
-0.7227507829666138,
0.1168273314833641,
-0.02696591056883335,
-0.45075371861457825,
-0.21103189885616302,
-0.09029679000377655,
-0.3136507570743561,
0.2395176887512207,
0.3742040693759918,
-0.8982856273651123,
-0.41287246346473694,
-0.5064631104469299,
-0.170126676559... |
scfengv/TCBert_FT_Train274_Val60-0 | scfengv | 2023-11-29T01:32:38Z | 4 | 0 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"zh",
"dataset:scfengv/autotrain-data-TCBert_FT_Train274_Val60",
"endpoints_compatible",
"region:us"
] | 2023-11-29T01:32:38Z | 2023-11-28T15:34:03.000Z | null | null | ---
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
datasets:
- scfengv/autotrain-data-TCBert_FT_Train274_Val60
language:
- zh
---
# Model Tasks
- Problem type: Text Classification
## Validation Metrics
- loss: 0.736057698726654
- f1_macro: 0.7235195267863145
- f1_micro: 0.7333333333333333
- f1_weighted: 0.7235195267863145
- precision_macro: 0.7594785575048734
- precision_micro: 0.7333333333333333
- precision_weighted: 0.7594785575048733
- recall_macro: 0.7333333333333333
- recall_micro: 0.7333333333333333
- recall_weighted: 0.7333333333333333
- accuracy: 0.7333333333333333
0.019620411098003387,
-0.3523107171058655,
0.6022941470146179,
0.6242885589599609,
-0.04019799456000328,
-0.26376980543136597,
-0.04881765693426132,
0.2390602082014084,
-0.023510495200753212,
0.27097558975219727,
-0.6377961039543152,
-0.7324885725975037,
-1.010682463645935,
0.0924617126584... |
TeeA/Donut-VNChart | TeeA | 2023-11-29T16:24:58Z | 4 | 0 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | 2023-11-29T16:24:58Z | 2023-11-28T16:09:50.000Z | null | null | Entry not found | null | transformers | null | null | null | null | null | null | null | null | null | null | TeeA/Donut-VNChart | [
-0.3227651119232178,
-0.22568456828594208,
0.8622261881828308,
0.43461447954177856,
-0.5282989740371704,
0.7012965083122253,
0.7915719747543335,
0.0761861652135849,
0.7746025323867798,
0.25632235407829285,
-0.7852817177772522,
-0.22573819756507874,
-0.9104477763175964,
0.5715669393539429,
... |
MLlabs2023/sentiment_model | MLlabs2023 | 2023-11-29T12:43:46Z | 4 | 0 | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T12:43:46Z | 2023-11-28T20:49:55.000Z | null | null | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: MLlabs2023/sentiment_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MLlabs2023/sentiment_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2052
- Train Accuracy: 0.9270
- Validation Loss: 0.2959
- Validation Accuracy: 0.8890
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4494 | 0.7870 | 0.3609 | 0.8590 | 0 |
| 0.2052 | 0.9270 | 0.2959 | 0.8890 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | MLlabs2023/sentiment_model | [
-0.6793493032455444,
-0.6884152889251709,
0.3182799816131592,
0.2815110385417938,
-0.47680220007896423,
-0.3745608627796173,
-0.2016262710094452,
-0.07273072749376297,
0.18995240330696106,
0.14292684197425842,
-0.7983434200286865,
-0.8360599279403687,
-0.8763754963874817,
-0.17090138792991... |
yukiarimo/yuna-emotion | yukiarimo | 2023-11-29T02:46:10Z | 4 | 1 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"distilroberta",
"sentiment",
"emotion",
"en",
"endpoints_compatible",
"region:us"
] | 2023-11-29T02:46:10Z | 2023-11-29T02:43:56.000Z | null | null | ---
language: "en"
tags:
- distilroberta
- sentiment
- emotion
widget:
- text: "Oh wow. I didn't know that."
- text: "This movie always makes me cry.."
- text: "Oh Happy Day"
---
# Yuna Emotion
This is an AGI model for Yuna AI. Seven emotions are available:
1) anger 🤬
2) disgust 🤢
3) fear 😨
4) joy 😀
5) neutral 😐
6) sadness 😭
7) surprise 😲
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | yukiarimo/yuna-emotion | [
-0.42731860280036926,
-0.30682477355003357,
0.21346616744995117,
0.7646508812904358,
-0.49162808060646057,
-0.19317670166492462,
0.6393609046936035,
-0.42053860425949097,
0.5633893013000488,
0.24669304490089417,
-0.630897045135498,
-0.39544597268104553,
-0.7592344880104065,
0.2658029198646... |
GraydientPlatformAPI/kikimix-am | GraydientPlatformAPI | 2023-11-29T03:10:30Z | 4 | 0 | null | [
"diffusers",
"license:openrail",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-11-29T03:10:30Z | 2023-11-29T03:02:33.000Z | null | null | ---
license: openrail
---
| null | diffusers | null | null | null | null | null | null | null | null | null | null | GraydientPlatformAPI/kikimix-am | [
-0.12853394448757172,
-0.1861674040555954,
0.6529132127761841,
0.49436259269714355,
-0.19319306313991547,
0.2360745519399643,
0.36071985960006714,
0.050563570111989975,
0.5793654918670654,
0.7400139570236206,
-0.6508103013038635,
-0.23783987760543823,
-0.7102250456809998,
-0.04782596975564... |
Aleksia/finetuning-distilBert_sentiment | Aleksia | 2023-11-29T05:05:44Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T05:05:44Z | 2023-11-29T03:14:44.000Z | null | null | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilBert_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilBert_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Accuracy: 0.9148
- F1: 0.9149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
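A minimal usage sketch; the label names returned depend on the (unspecified) training dataset:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="Aleksia/finetuning-distilBert_sentiment"
)
print(classifier("The service was quick and the staff were friendly."))
```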
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | Aleksia/finetuning-distilBert_sentiment | [
-0.5570462346076965,
-0.7109775543212891,
0.2030184119939804,
0.3360176086425781,
-0.4938681125640869,
-0.25903213024139404,
-0.24271830916404724,
-0.021346405148506165,
0.11914964765310287,
0.16726545989513397,
-0.7098782062530518,
-0.6835281848907471,
-0.8551519513130188,
-0.054833538830... |
notbdq/gpt2-medium-turkish-alpaca | notbdq | 2023-11-29T04:10:30Z | 4 | 1 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"tr",
"dataset:emre/stanford-alpaca-cleaned-turkish-translated",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T04:10:30Z | 2023-11-29T03:35:37.000Z | null | null | ---
license: apache-2.0
datasets:
- emre/stanford-alpaca-cleaned-turkish-translated
language:
- tr
library_name: transformers
---
For AI usage and code, see: https://github.com/selahattinbakidamar/gpt2-medium-turkish-alpaca/ | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | notbdq/gpt2-medium-turkish-alpaca | [
-0.5959521532058716,
-0.5778450965881348,
0.3273678123950958,
0.43737852573394775,
-0.8698040843009949,
-0.32419705390930176,
0.02412945218384266,
-0.6079708337783813,
0.9410451054573059,
0.4802018105983734,
-0.48688140511512756,
-0.5323673486709595,
-0.7650917172431946,
-0.103321149945259... |
Matupom/thainer-corpus-v2-dataset-old | Matupom | 2023-11-29T05:01:29Z | 4 | 0 | null | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T05:01:29Z | 2023-11-29T05:01:12.000Z | null | null | Entry not found | null | transformers | token-classification | null | null | null | null | null | null | null | null | null | Matupom/thainer-corpus-v2-dataset-old | [
-0.3227648138999939,
-0.22568409144878387,
0.8622261881828308,
0.43461495637893677,
-0.5282989740371704,
0.7012965083122253,
0.7915717959403992,
0.07618632167577744,
0.7746028304100037,
0.2563219666481018,
-0.7852813601493835,
-0.22573833167552948,
-0.9104479551315308,
0.5715669393539429,
... |
sinonimayzer/roberta-1.8-v2 | sinonimayzer | 2023-11-29T05:24:27Z | 4 | 0 | null | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"uz",
"dataset:sinonimayzer/mixed-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T05:24:27Z | 2023-11-29T05:13:59.000Z | null | null | ---
widget:
- text: Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi.
example_title: Example 1
- text: >-
Shu munosabat bilan O‘zbekiston Prezidenti global inqiroz sharoitida savdo-iqtisodiy hamkorlikni <mask> va hududlararo aloqalarni rivojlantirishning muhim masalalariga to‘xtalib o‘tdi.
example_title: Example 2
datasets:
- sinonimayzer/mixed-data
language:
- uz
pipeline_tag: fill-mask
---
Train/learning rate chart
<img src="https://huggingface.co/sinonimayzer/roberta-1.8-v2/resolve/main/train-learning_rate.png">
Train/loss rate chart
<img src="https://huggingface.co/sinonimayzer/roberta-1.8-v2/resolve/main/train-loss.png"> | null | transformers | fill-mask | null | null | null | null | null | null | null | null | null | sinonimayzer/roberta-1.8-v2 | [
-0.2143578678369522,
-0.2891266345977783,
0.24977298080921173,
0.27530258893966675,
-0.4782731235027313,
0.012661140412092209,
0.04690080136060715,
-0.21806767582893372,
0.6338438987731934,
0.3074694573879242,
-0.8619180917739868,
-0.3903854489326477,
-1.0349452495574951,
-0.50199723243713... |
PracticeLLM/Custom-KoLLM-13B-v6 | PracticeLLM | 2023-11-29T20:47:36Z | 4 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/ko-gu-platyorca-mergeset",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T20:47:36Z | 2023-11-29T05:31:33.000Z | null | null | ---
language:
- ko
datasets:
- kyujinpy/ko-gu-platyorca-mergeset
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **⭐My custom LLM 13B⭐**
## Model Details
**Model Developers**
- Kyujin Han (kyujinpy)
**Model Architecture**
- My custom LLM 13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
- [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
**Training Dataset**
- [kyujinpy/ko-gu-platyorca-mergeset](https://huggingface.co/datasets/kyujinpy/ko-gu-platyorca-mergeset).
---
# Model comparisons
> Ko-LLM leaderboard (11/27; [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard))
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| ⭐My custom LLM 13B-v1⭐ | **50.19** | **45.99** | 56.93 | 41.78 | 41.66 | **64.58** |
| ⭐My custom LLM 13B-v4⭐ | 49.89 | 45.05 | **57.06** | 41.83 | **42.93** | 62.57 |
| **⭐My custom LLM 13B-v6⭐** | NaN | NaN | NaN | NaN | NaN | NaN |
---
# Model comparisons2
> AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness)
| Model | Copa | Copa | HellaSwag | HellaSwag | BoolQ | BoolQ | Sentineg | Sentineg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot |
| ⭐My custom LLM 13B-v1⭐ | 0.7987 | 0.8269 | 0.4994 | 0.5660 | 0.3343 | 0.5060 | 0.6984 | 0.9723 |
| ⭐My custom LLM 13B-v4⭐ | **0.7988** | 0.8279 | **0.4995** | 0.4953 | 0.3343 | 0.3558 | **0.7825** | 0.9698 |
| **⭐My custom LLM 13B-v6⭐** | 0.7938 | 0.8259 | 0.4905 | 0.5620 | **0.8656** | 0.8457 | 0.3720 | 0.9698 |
| [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) | 0.7768 | 0.8128 | 0.4999 | 0.5127 | 0.3988 | 0.7038 | 0.5870 | 0.9748 |
---
# Implementation Code
```python
# Load the model and tokenizer from the Hub
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/Custom-KoLLM-13B-v6"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,  # half precision to fit the 13B weights in memory
    device_map='auto'           # place layers across available devices automatically
)
tokenizer = AutoTokenizer.from_pretrained(repo)
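
# A minimal generation sketch; the prompt and the absence of a prompt template
# are assumptions, since the card does not specify an input format.
inputs = tokenizer("대한민국의 수도는 어디인가요?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))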
``` | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | PracticeLLM/Custom-KoLLM-13B-v6 | [
-0.5664677023887634,
-0.5805453062057495,
0.21791353821754456,
0.47280606627464294,
-0.44134289026260376,
0.14788495004177094,
-0.08573108166456223,
-0.41843146085739136,
0.30122092366218567,
0.3694058060646057,
-0.6550390124320984,
-0.8057163953781128,
-0.807100772857666,
-0.0087195374071... |
karawalla/llama-2-7b-karawalla | karawalla | 2023-11-29T06:04:43Z | 4 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T06:04:43Z | 2023-11-29T05:59:41.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | karawalla/llama-2-7b-karawalla | [
-0.3227648138999939,
-0.22568409144878387,
0.8622261881828308,
0.43461495637893677,
-0.5282989740371704,
0.7012965083122253,
0.7915717959403992,
0.07618632167577744,
0.7746028304100037,
0.2563219666481018,
-0.7852813601493835,
-0.22573833167552948,
-0.9104479551315308,
0.5715669393539429,
... |
picas9dan/20231129_2_onnx_8bit | picas9dan | 2023-11-29T06:49:44Z | 4 | 0 | null | [
"transformers",
"onnx",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T06:49:44Z | 2023-11-29T06:22:09.000Z | null | null | Entry not found | null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | picas9dan/20231129_2_onnx_8bit | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
User1115/whisper-large-v2-test-singleWord-small-30steps | User1115 | 2023-11-29T08:36:42Z | 4 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"region:us"
] | 2023-11-29T08:36:42Z | 2023-11-29T08:36:30.000Z | null | null | ---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
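A sketch of loading this adapter on top of its 8-bit base model, mirroring the quantization config above:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration

# Load the base model in 8-bit, then attach the PEFT adapter from this repo.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "User1115/whisper-large-v2-test-singleWord-small-30steps"
)
```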
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | User1115/whisper-large-v2-test-singleWord-small-30steps | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
Jarnails1559/Sikhism-gpt2 | Jarnails1559 | 2023-11-29T11:23:58Z | 4 | 0 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T11:23:58Z | 2023-11-29T10:41:44.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Jarnails1559/Sikhism-gpt2 | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
EricPeter/sw-text-classification-model | EricPeter | 2023-11-29T11:03:48Z | 4 | 0 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T11:03:48Z | 2023-11-29T11:03:09.000Z | null | null | Entry not found | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | EricPeter/sw-text-classification-model | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
Lakoc/gpt2_256h_8l_add_head5_04 | Lakoc | 2023-11-29T11:28:49Z | 4 | 0 | null | [
"transformers",
"gpt2-multi-head",
"endpoints_compatible",
"region:us"
] | 2023-11-29T11:28:49Z | 2023-11-29T11:28:47.000Z | null | null | Entry not found | null | transformers | null | null | null | null | null | null | null | null | null | null | Lakoc/gpt2_256h_8l_add_head5_04 | [
-0.3227651119232178,
-0.22568456828594208,
0.8622261881828308,
0.43461447954177856,
-0.5282989740371704,
0.7012965083122253,
0.7915719747543335,
0.0761861652135849,
0.7746025323867798,
0.25632235407829285,
-0.7852817177772522,
-0.22573819756507874,
-0.9104477763175964,
0.5715669393539429,
... |
TheBoefOfWallstreet/baseline_v2 | TheBoefOfWallstreet | 2023-11-29T11:40:17Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T11:40:17Z | 2023-11-29T11:35:07.000Z | null | null | Entry not found | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | TheBoefOfWallstreet/baseline_v2 | [
-0.3227651119232178,
-0.22568456828594208,
0.8622261881828308,
0.43461447954177856,
-0.5282989740371704,
0.7012965083122253,
0.7915719747543335,
0.0761861652135849,
0.7746025323867798,
0.25632235407829285,
-0.7852817177772522,
-0.22573819756507874,
-0.9104477763175964,
0.5715669393539429,
... |
vrhoward/esm2_t12_35M_UR50D-finetuned | vrhoward | 2023-11-29T16:03:39Z | 4 | 0 | null | [
"transformers",
"safetensors",
"esm",
"fill-mask",
"generated_from_trainer",
"base_model:facebook/esm2_t12_35M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2023-11-29T16:03:39Z | 2023-11-29T14:07:20.000Z | null | null | ---
license: mit
base_model: facebook/esm2_t12_35M_UR50D
tags:
- generated_from_trainer
model-index:
- name: esm2_t12_35M_UR50D-finetuned
results: []
widget:
- text: "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHSTQDLFLPFFSNVTWFHAIHVSGTNGTKRFDNPVLPFNDGVYFASTEKSNIIRGWIFGTTLDSKTQSLLIVNNATNVVIKVCEFQFCNDPFLGVYYHKNNKSWMESEFRVYSSANNCTFEYVSQPFLMDLEGKQGNFKNLREFVFKNIDGYFKIYSKHTPINLVRDLPQGFSALEPLVDLPIGINITRFQTLLALHRSYLTPGDSSSGWTAGAAAYYVGYLQPRTFLLKYNENGTITDAVDCALDPLSETKCTLKSFTVEKGIYQTSNFRVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNFNFNGLTGTGVLTESNKKFLPFQQFGRDIADTTDAVRDPQTLEILDITPCSFGGVSVITPGTNTSNQVAVLYQDVNCTEVPVAIHADQLTPTWRVYSTGSNVFQTRAGCLIGAEHVNNSYECDIPIGAGICASYQTQTNSPRRARSVASQSIIAYTMSLGAENSVAYSNNSIAIPTNFTISVTTEILPVSMTKTSVDCTMYICGDSTECSNLLLQYGSFCTQLNRALTGIAVEQDKNTQEVFAQVKQIYKTPPIKDFGGFNFSQILPDPSKPSKRS<mask>IEDLLFNKVTLADAGFIKQYGDCLGDIAARDLICAQKFNGLTVLPPLLTDEMIAQYTSALLAGTITSGWTFGAGAALQIPFAMQMAYRFNGIGVTQNVLYENQKLIANQFNSAIGKIQDSLSSTASALGKLQDVVNQNAQALNTLVKQLSSNFGAISSVLNDILSRLDKVEAEVQIDRLITGRLQSLQTYVTQQLIRAAEIRASANLAATKMSECVLGQSKRVDFCGKGYHLMSFPQSAPHGVVFLHVTYVPAQEKNFTTAPAICHDGKAHFPREGVFVSNGTHWFVTQRNFYEPQIITTDNTFVSGNCDVVIGIVNNTVYDPLQPELDSFKEELDKYFKNHTSPDVDLGDISGINASVVNIQKEIDRLNEVAKNLNESLIDLQELGKYEQYIKWPWYIWLGFIAGLIAIVMVTIMLCCMTSCCSCLKGCCSCGSCCKFDEDDSEPVLKGVKLHYT"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the [thermostableProteins](https://huggingface.co/datasets/vrhoward/thermostableProteins/viewer/default/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5701
- Perplexity: 1.77
## Model description
- Model Architecture: encoder-only transformer protein language model
- Fine-tuning Objective: unsupervised masked language modeling (MLM), used to learn protein sequence mutations that increase protein stability
- Developer: Victoria Howard
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3904 | 1.0 | 607 | 0.8516 |
| 0.7755 | 2.0 | 1214 | 0.6255 |
| 0.6176 | 3.0 | 1821 | 0.5708 |
- Perplexity of original model: 7.83
- Perplexity of fine-tuned model: 1.77
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
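A minimal usage sketch; the sequence fragment is illustrative and `<mask>` is the ESM-2 mask token:

```python
from transformers import pipeline

# Score candidate residues at the masked position.
unmasker = pipeline("fill-mask", model="vrhoward/esm2_t12_35M_UR50D-finetuned")
sequence = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSF<mask>RGVYYPDKVFRSSVLHSTQDLFLPFF"
for prediction in unmasker(sequence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```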
| null | transformers | fill-mask | null | null | null | null | null | null | null | null | null | vrhoward/esm2_t12_35M_UR50D-finetuned | [
-0.2975633442401886,
-0.6634838581085205,
-0.014106357470154762,
-0.027538394555449486,
-0.2232409417629242,
-0.21809469163417816,
-0.1265472024679184,
-0.28328001499176025,
0.19904732704162598,
0.5859276056289673,
-0.7795975804328918,
-0.6691182851791382,
-0.8215256929397583,
0.1524597853... |
ethompson93/ppo-LunarLander-v2 | ethompson93 | 2023-11-29T16:13:52Z | 4 | 0 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T16:13:52Z | 2023-11-29T14:20:06.000Z | null | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.56 +/- 17.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; check the Files & versions tab for the exact name
checkpoint = load_from_hub("ethompson93/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | ethompson93/ppo-LunarLander-v2 | [
-0.003174568060785532,
-0.3944118022918701,
0.24817675352096558,
0.3390538692474365,
-0.08787596970796585,
0.04007981717586517,
0.500053346157074,
-0.17607858777046204,
0.28882235288619995,
0.944482684135437,
-0.6269252300262451,
-0.5120340585708618,
-0.49809592962265015,
-0.27938362956047... |
antonyseabramedeiros/llama-2-7b-ContratosTI | antonyseabramedeiros | 2023-11-29T15:24:25Z | 4 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T15:24:25Z | 2023-11-29T15:16:34.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | antonyseabramedeiros/llama-2-7b-ContratosTI | [
-0.32276490330696106,
-0.2256845235824585,
0.8622258305549622,
0.4346151351928711,
-0.52829909324646,
0.7012964487075806,
0.791571855545044,
0.07618629187345505,
0.7746025323867798,
0.2563220262527466,
-0.7852813005447388,
-0.22573833167552948,
-0.9104480743408203,
0.5715667605400085,
-0... |
jjmcarrascosa/dqn-SpaceInvadersNoFrameskip-v4 | jjmcarrascosa | 2023-11-29T15:41:05Z | 4 | 0 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T15:41:05Z | 2023-11-29T15:40:30.000Z | null | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 644.50 +/- 149.86
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jjmcarrascosa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jjmcarrascosa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jjmcarrascosa
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | jjmcarrascosa/dqn-SpaceInvadersNoFrameskip-v4 | [
-0.621124267578125,
-0.5604381561279297,
0.28333279490470886,
0.35646945238113403,
-0.1552129089832306,
-0.2336561679840088,
0.14262932538986206,
-0.18439486622810364,
0.18824511766433716,
0.31085872650146484,
-1.0176197290420532,
-0.48816731572151184,
-0.3584744334220886,
-0.0551593676209... |
A2H0H0R1/llama2-7B-gpt4 | A2H0H0R1 | 2023-11-29T16:41:41Z | 4 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-factory",
"lora",
"generated_from_trainer",
"dataset:A2H0H0R1/alpaca_data_gpt4_2",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"regi... | 2023-11-29T16:41:41Z | 2023-11-29T16:13:29.000Z | null | null | ---
license: other
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: 2023-11-29-06-20-56
results: []
datasets:
- A2H0H0R1/alpaca_data_gpt4_2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2023-11-29-06-20-56
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the alpaca_data_gpt4_2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Training loss curve: [training_loss.png](https://huggingface.co/A2H0H0R1/llama2-7B-gpt4/blob/main/training_loss.png)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | A2H0H0R1/llama2-7B-gpt4 | [
-0.4863970875740051,
-0.7683557271957397,
0.13998834788799286,
0.3645271062850952,
-0.5579046607017517,
-0.26747891306877136,
-0.027992649003863335,
-0.5351039171218872,
0.48608192801475525,
0.45102599263191223,
-0.8823182582855225,
-0.5628929734230042,
-0.7891068458557129,
0.0425544939935... |
bmistry4/a2c-PandaReachDense-v3 | bmistry4 | 2023-11-29T16:19:03Z | 4 | 0 | null | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T16:19:03Z | 2023-11-29T16:14:52.000Z | null | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; check the Files & versions tab for the exact name
checkpoint = load_from_hub("bmistry4/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | bmistry4/a2c-PandaReachDense-v3 | [
-0.32333603501319885,
-0.6825423836708069,
-0.02639610506594181,
0.6930292844772339,
0.02815971150994301,
-0.08575806021690369,
0.49541807174682617,
-0.34901463985443115,
0.4173521399497986,
0.6371418833732605,
-0.8893340229988098,
-0.4952719211578369,
-0.4364124536514282,
-0.0147031201049... |
optical908/distilBert-spam-Classifier | optical908 | 2023-11-29T16:24:26Z | 4 | 1 | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T16:24:26Z | 2023-11-29T16:23:59.000Z | null | null | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: distilBert-spam-Classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilBert-spam-Classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Tokenizers 0.15.0
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | optical908/distilBert-spam-Classifier | [
-0.5178030133247375,
-0.8260520696640015,
0.24770791828632355,
0.07362571358680725,
-0.5027915239334106,
-0.14022672176361084,
0.06459538638591766,
-0.19164083898067474,
-0.07171723991632462,
0.41860687732696533,
-0.483325332403183,
-0.6328621506690979,
-1.1511927843093872,
-0.187836736440... |
robin-weaver/me_vX_LoRA | robin-weaver | 2023-11-29T16:24:17Z | 4 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | 2023-11-29T16:24:17Z | 2023-11-29T16:24:13.000Z | null | null |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK person
license: openrail++
---
# SDXL LoRA DreamBooth - robin-weaver/me_vX_LoRA
<Gallery />
## Model description
These are robin-weaver/me_vX_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/robin-weaver/me_vX_LoRA/tree/main) them in the Files & versions tab.
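For quick testing, the weights should also load directly with diffusers (a minimal sketch, assuming `load_lora_weights` resolves the default LoRA weight file in this repo):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base pipeline, attach the LoRA, and generate with the trigger phrase
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("robin-weaver/me_vX_LoRA")
image = pipe("a photo of TOK person in a garden").images[0]
```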
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | robin-weaver/me_vX_LoRA | [
-0.0948193222284317,
-0.39035072922706604,
0.5360764265060425,
0.08160308748483658,
-0.5427722334861755,
0.15864764153957367,
0.29061686992645264,
-0.3158179223537445,
0.6367608904838562,
0.626632034778595,
-0.793143630027771,
-0.503654956817627,
-0.8884113430976868,
-0.12859278917312622,
... |
fd3v/Raymond-Reddington | fd3v | 2023-11-29T19:43:11Z | 4 | 0 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"not-for-all-audiences",
"en",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T19:43:11Z | 2023-11-29T16:54:37.000Z | null | null | ---
license: apache-2.0
language:
- en
- ar
pipeline_tag: text-generation
tags:
- not-for-all-audiences
--- | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | fd3v/Raymond-Reddington | [
-0.1285340040922165,
-0.1861676573753357,
0.6529127955436707,
0.49436259269714355,
-0.19319328665733337,
0.23607435822486877,
0.36072009801864624,
0.05056355893611908,
0.579365611076355,
0.7400140166282654,
-0.6508103609085083,
-0.23783960938453674,
-0.7102246284484863,
-0.0478256717324256... |
Wu2940/experiments | Wu2940 | 2023-11-29T17:18:32Z | 4 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | 2023-11-29T17:18:32Z | 2023-11-29T17:18:29.000Z | null | null | ---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
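Pending the author's example, a minimal loading sketch inferred from this card's metadata (PEFT adapter on the listed base model, loaded in 8-bit to match the quantization config below; an untested assumption, not the author's code):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit base model, then attach the PEFT adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "baffo32/decapoda-research-llama-7B-hf", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Wu2940/experiments")
tokenizer = AutoTokenizer.from_pretrained("baffo32/decapoda-research-llama-7B-hf")
```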
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | Wu2940/experiments | [
-0.5759305953979492,
-0.5910220146179199,
0.40387964248657227,
0.09421482682228088,
-0.2948668897151947,
-0.2313520312309265,
0.02362118847668171,
-0.5023764967918396,
0.023363202810287476,
0.5719980001449585,
-0.717378556728363,
-0.591495931148529,
-0.5747221112251282,
-0.0442091710865497... |
imadejski/bhr_descriptions_trained_distillRoBERTa | imadejski | 2023-11-29T17:51:42Z | 4 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:51:42Z | 2023-11-29T17:50:45.000Z | null | null | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: bhr_descriptions_trained_distillRoBERTa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhr_descriptions_trained_distillRoBERTa
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 48 | 1.7029 |
| No log | 2.0 | 96 | 1.5455 |
| No log | 3.0 | 144 | 1.5840 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | fill-mask | null | null | null | null | null | null | null | null | null | imadejski/bhr_descriptions_trained_distillRoBERTa | [
-0.274789422750473,
-0.6700605750083923,
0.14844068884849548,
0.24958044290542603,
-0.4441080391407013,
-0.31507202982902527,
-0.033796995878219604,
-0.24024181067943573,
-0.017463652417063713,
0.18684983253479004,
-0.695838212966919,
-0.6010012030601501,
-0.6835622191429138,
-0.0619035884... |
tempertrash/corgy_dog_LoRA | tempertrash | 2023-11-29T18:11:47Z | 4 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | 2023-11-29T18:11:47Z | 2023-11-29T18:09:22.000Z | null | null |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - tempertrash/corgy_dog_LoRA
<Gallery />
## Model description
These are tempertrash/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/tempertrash/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
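A minimal diffusers loading sketch (assuming `load_lora_weights` resolves the default weight file in this repo):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base pipeline, attach the LoRA, and generate with the trigger phrase
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("tempertrash/corgy_dog_LoRA")
image = pipe("a photo of TOK dog on the beach").images[0]
```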
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | tempertrash/corgy_dog_LoRA | [
-0.2491767406463623,
-0.28030726313591003,
0.3662928342819214,
0.23661278188228607,
-0.6408252716064453,
0.11764949560165405,
0.16843923926353455,
-0.29706743359565735,
0.5994877219200134,
0.43321195244789124,
-0.5952311158180237,
-0.662898063659668,
-0.6166491508483887,
-0.237887129187583... |
zostrich/spaceInvader_agent | zostrich | 2023-11-29T18:37:19Z | 4 | 0 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T18:37:19Z | 2023-11-29T18:36:51.000Z | null | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 5.00 +/- 7.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zostrich -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zostrich -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zostrich
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | zostrich/spaceInvader_agent | [
-0.6224572658538818,
-0.5499594807624817,
0.2860104739665985,
0.3624403774738312,
-0.16275663673877716,
-0.25243067741394043,
0.13653972744941711,
-0.19055919349193573,
0.16716159880161285,
0.3068731725215912,
-1.016994833946228,
-0.5053579807281494,
-0.3614364266395569,
-0.053167596459388... |
Kennedy-Juma/model.safetensors | Kennedy-Juma | 2023-11-29T19:05:29Z | 4 | 0 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T19:05:29Z | 2023-11-29T18:47:06.000Z | null | null | ---
license: mit
---
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | Kennedy-Juma/model.safetensors | [
-0.1285340040922165,
-0.1861676573753357,
0.6529127955436707,
0.49436259269714355,
-0.19319328665733337,
0.23607435822486877,
0.36072009801864624,
0.05056355893611908,
0.579365611076355,
0.7400140166282654,
-0.6508103609085083,
-0.23783960938453674,
-0.7102246284484863,
-0.0478256717324256... |
YoungMeng/dqn-MsPacmanNoFrameskip-v4 | YoungMeng | 2023-11-29T18:50:24Z | 4 | 0 | null | [
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T18:50:24Z | 2023-11-29T18:50:00.000Z | null | null | ---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
metrics:
- type: mean_reward
value: 98.00 +/- 25.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga YoungMeng -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga YoungMeng -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/ -orga YoungMeng
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | YoungMeng/dqn-MsPacmanNoFrameskip-v4 | [
-0.6592769622802734,
-0.628593385219574,
0.1312970370054245,
0.3377085328102112,
-0.2380487024784088,
-0.2724461257457733,
0.005627728998661041,
-0.33375421166419983,
0.06864190846681595,
0.2775833308696747,
-0.9304145574569702,
-0.5656803250312805,
-0.4304199814796448,
0.10430342704057693... |
roninai1/ppo-LunarLander-v2 | roninai1 | 2023-11-29T21:05:28Z | 4 | 0 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T21:05:28Z | 2023-11-29T20:18:58.000Z | null | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.06 +/- 14.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; check the Files & versions tab for the exact name
checkpoint = load_from_hub("roninai1/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | roninai1/ppo-LunarLander-v2 | [
-0.00317479413934052,
-0.3944118022918701,
0.24817687273025513,
0.33905404806137085,
-0.08787575364112854,
0.04008011892437935,
0.5000529289245605,
-0.17607857286930084,
0.28882235288619995,
0.944482684135437,
-0.6269252300262451,
-0.5120340585708618,
-0.49809563159942627,
-0.2793833911418... |
DTAI-KULeuven/robbertje-1-gb-shuffled | DTAI-KULeuven | 2023-11-29T10:55:24Z | 3 | 0 | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"nl",
"arxiv:2101.05716",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T10:55:24Z | 2022-03-02T23:29:04.000Z | null | null | ---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
- RobBERTje
license: mit
datasets:
- oscar
- oscar (NL)
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."
---
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | this model |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
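Each variant can be loaded with the standard fill-mask pipeline; a minimal sketch for this model, reusing the widget example from this card:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="DTAI-KULeuven/robbertje-1-gb-shuffled")
unmasker("Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven.")
```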
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [Salazar et al. (2020)](https://arxiv.org/abs/1910.14659), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
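For reference, PPPL is typically computed by masking each token in turn and averaging the masked log-probabilities (the standard masked-LM scoring formulation; the library's exact computation may differ in details):

$$\mathrm{PPPL}(W) = \exp\Big(-\frac{1}{|W|}\sum_{i=1}^{|W|} \log P_{\mathrm{MLM}}(w_i \mid W_{\setminus i})\Big)$$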
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model             | DBRD | DIE-DAT | NER  | POS  | SICK-NL |
|-------------------|------|---------|------|------|---------|
| RobBERT (teacher) | 94.4 | 99.2    | 89.1 | 96.4 | 84.2    |
| Non-shuffled      | 90.2 | 98.4    | 82.9 | 95.5 | 83.4    |
| Shuffled          | 92.5 | 98.2    | 82.7 | 95.6 | 83.4    |
| Merged (p=0.5)    | 92.9 | 96.5    | 81.8 | 95.2 | 82.8    |
| BORT              | 89.6 | 92.2    | 79.7 | 94.3 | 81.0    |
| null | transformers | fill-mask | null | null | null | null | null | null | null | null | null | DTAI-KULeuven/robbertje-1-gb-shuffled | [
-0.44296160340309143,
-0.44926917552948,
0.3658086061477661,
0.19858169555664062,
-0.3686120808124542,
-0.20430950820446014,
0.01659044250845909,
-0.587418794631958,
0.4271332323551178,
0.04201071336865425,
-0.40468472242355347,
-0.29685866832733154,
-1.0253468751907349,
0.1852880418300628... |
propet/a2c-PandaReachDense-v2 | propet | 2023-11-29T17:11:13Z | 3 | 0 | null | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] | 2023-11-29T17:11:13Z | 2023-03-16T21:08:33.000Z | null | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.32 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; check the Files & versions tab for the exact name
checkpoint = load_from_hub("propet/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) | null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | propet/a2c-PandaReachDense-v2 | [
-0.2802620530128479,
-0.6863573789596558,
-0.12502621114253998,
0.5915706753730774,
-0.02465307153761387,
-0.17236456274986267,
0.3861331641674042,
-0.2928096652030945,
0.3687204420566559,
0.4729240834712982,
-0.8025069236755371,
-0.44108933210372925,
-0.45757973194122314,
0.00435716751962... |
ludis/tsukasa-7b-lora | ludis | 2023-11-29T15:16:38Z | 3 | 0 | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T15:16:38Z | 2023-10-02T02:37:05.000Z | null | null | ## Prompting
https://rentry.org/tsukasa13b - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
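For illustration, a hypothetical history assembled in this format (the role tokens are the real ones listed above; the message contents are invented):
```python
# Roles are delimited by the special tokens; a history is built by chaining them.
prompt = (
    "<|system|>Enter roleplay mode. You are Tsukasa."  # out-of-channel instruction (wording assumed)
    "<|user|>Hi! Who are you?"
    "<|model|>"  # the model generates its reply after this token
)
```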
## Training
Base model: mistralai/Mistral-7B-v0.1.
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training
on a 4x NVIDIA A40 GPU cluster.
The A40 GPU cluster has been graciously provided by [Arc Compute](https://www.arccompute.io/).
A rank-8 LoRA tune of mistralai/Mistral-7B-v0.1: first tuned on koishi commit 6e675d1 for one epoch, then on limarp (without the ponyville, lolicit, all the fallen, and eka's portal subsets), version 2023-09-30, for 2 epochs in metharme format.
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | ludis/tsukasa-7b-lora | [
-0.5720891356468201,
-0.6007150411605835,
0.4406437575817108,
0.11322198063135147,
-0.3136017918586731,
-0.18129262328147888,
0.022369474172592163,
-0.18056637048721313,
-0.006230256054550409,
0.3797571659088135,
-0.9272043108940125,
-0.40899914503097534,
-0.36724188923835754,
0.1654990762... |
sammyj4148/cu-go-bart-large-xsum | sammyj4148 | 2023-11-30T01:27:15Z | 3 | 0 | null | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-30T01:27:15Z | 2023-10-31T18:48:40.000Z | null | null | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: cu-go-bart-large-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cu-go-bart-large-xsum
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | sammyj4148/cu-go-bart-large-xsum | [
-0.44002363085746765,
-0.7884449362754822,
0.3820990025997162,
0.03711672127246857,
-0.3774828612804413,
-0.11876931041479111,
-0.29268085956573486,
-0.29024800658226013,
0.597366452217102,
0.43013325333595276,
-0.788558304309845,
-0.4283229410648346,
-0.5922242999076843,
-0.14903774857521... |
SiRoZaRuPa/longpause-1b-1110-1 | SiRoZaRuPa | 2023-11-29T08:50:49Z | 3 | 0 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:SiRoZaRuPa/1b-1023-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T08:50:49Z | 2023-11-10T05:51:02.000Z | null | null | ---
license: apache-2.0
base_model: SiRoZaRuPa/1b-1023-1
tags:
- generated_from_trainer
datasets:
- audiofolder
model-index:
- name: longpause-1b-1110-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longpause-1b-1110-1
This model is a fine-tuned version of [SiRoZaRuPa/1b-1023-1](https://huggingface.co/SiRoZaRuPa/1b-1023-1) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6885
- Cer: 0.3400
## Model description
This model cannot infer tags at the beginning or end of a sentence.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.5038 | 12.2 | 500 | 4.1586 | 0.9984 |
| 3.8187 | 24.39 | 1000 | 3.8006 | 0.9947 |
| 1.2721 | 36.59 | 1500 | 1.8132 | 0.4093 |
| 0.7793 | 48.78 | 2000 | 1.6701 | 0.3805 |
| 0.5918 | 60.98 | 2500 | 1.6559 | 0.3779 |
| 0.4991 | 73.17 | 3000 | 1.6485 | 0.3449 |
| 0.437 | 85.37 | 3500 | 1.6716 | 0.3403 |
| 0.3924 | 97.56 | 4000 | 1.6669 | 0.3410 |
| 0.3684 | 109.76 | 4500 | 1.6903 | 0.3420 |
| 0.353 | 121.95 | 5000 | 1.6855 | 0.3389 |
| 0.3465 | 134.15 | 5500 | 1.6864 | 0.3401 |
| 0.343 | 146.34 | 6000 | 1.6885 | 0.3400 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
| null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | SiRoZaRuPa/longpause-1b-1110-1 | [
-0.5246485471725464,
-0.5073400735855103,
0.0930117517709732,
0.21195517480373383,
-0.23618921637535095,
-0.30398258566856384,
-0.027940450236201286,
-0.3128334581851959,
0.26538246870040894,
0.4160877466201782,
-0.8808956742286682,
-0.6131951212882996,
-0.659369945526123,
-0.1777793318033... |
reza-alipour/vq-tokenizer | reza-alipour | 2023-11-29T09:15:15Z | 3 | 0 | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | 2023-11-29T09:15:15Z | 2023-11-20T16:19:27.000Z | null | null | Entry not found | null | transformers | null | null | null | null | null | null | null | null | null | null | reza-alipour/vq-tokenizer | [
-0.3227651119232178,
-0.22568456828594208,
0.8622261881828308,
0.43461447954177856,
-0.5282989740371704,
0.7012965083122253,
0.7915719747543335,
0.0761861652135849,
0.7746025323867798,
0.25632235407829285,
-0.7852817177772522,
-0.22573819756507874,
-0.9104477763175964,
0.5715669393539429,
... |
finiteautomata/bert-base-spanish-wwm-cased-reranker | finiteautomata | 2023-11-29T14:31:46Z | 3 | 1 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T14:31:46Z | 2023-11-23T14:17:06.000Z | null | null | ---
{}
---
# Reranker with bert
## Metrics
| Metric | Value |
| ------ | ----- |
| MRR | 0.653 |
| MRR Grouped | 0.703 |
| Accuracy | 0.575 |
| Accuracy Grouped | 0.632 | | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | finiteautomata/bert-base-spanish-wwm-cased-reranker | [
-0.019037580117583275,
-0.2756956219673157,
0.13914242386817932,
0.48300638794898987,
-0.27057787775993347,
0.18182426691055298,
-0.046584419906139374,
-0.05910569429397583,
0.7953835129737854,
-0.31998372077941895,
-0.25042271614074707,
-0.7900420427322388,
-0.9577946662902832,
-0.1089217... |
finiteautomata/bert-base-spanish-wwm-uncased-reranker | finiteautomata | 2023-11-29T14:50:45Z | 3 | 0 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T14:50:45Z | 2023-11-23T16:10:08.000Z | null | null | ---
{}
---
# Reranker with bert
## Metrics
| Metric | Value |
| ------ | ----- |
| MRR | 0.637 |
| MRR Grouped | 0.703 |
| Accuracy | 0.558 |
| Accuracy Grouped | 0.628 | | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | finiteautomata/bert-base-spanish-wwm-uncased-reranker | [
-0.015068842098116875,
-0.2566784620285034,
0.1491144448518753,
0.49898561835289,
-0.2706972062587738,
0.1829453855752945,
-0.03175348415970802,
-0.044363051652908325,
0.8263749480247498,
-0.2861880362033844,
-0.24888727068901062,
-0.7593788504600525,
-0.9531916975975037,
-0.09384436905384... |
alfredolozano/CODEX_LoRA | alfredolozano | 2023-11-29T15:50:29Z | 3 | 0 | null | [
"diffusers",
"tensorboard",
"if",
"if-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | 2023-11-29T15:50:29Z | 2023-11-24T21:12:18.000Z | null | null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: <class 'str'>
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - alfredolozano/CODEX_LoRA
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on <class 'str'> using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | alfredolozano/CODEX_LoRA | [
-0.12533490359783173,
-0.51091468334198,
0.32301610708236694,
0.17400360107421875,
-0.45780912041664124,
0.41683438420295715,
0.4666728079319,
-0.19963763654232025,
0.703310489654541,
0.566622793674469,
-0.5887060761451721,
-0.47597989439964294,
-0.6709114909172058,
-0.14751245081424713,
... |
narraticlabs/social-clf | narraticlabs | 2023-11-29T10:58:55Z | 3 | 0 | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T10:58:55Z | 2023-11-25T16:51:30.000Z | null | null | Entry not found | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | narraticlabs/social-clf | [
-0.32276463508605957,
-0.22568437457084656,
0.8622260093688965,
0.43461504578590393,
-0.5282986760139465,
0.7012966275215149,
0.7915719747543335,
0.07618647813796997,
0.7746024131774902,
0.2563219368457794,
-0.7852815389633179,
-0.22573824226856232,
-0.910447895526886,
0.5715669393539429,
... |
dieumerci/mountain-recognition-ner | dieumerci | 2023-11-29T20:49:43Z | 3 | 0 | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T20:49:43Z | 2023-11-25T19:33:47.000Z | null | null | # Mountain Name Recognition (NER) Model
## Overview
This repository contains a pre-trained Named Entity Recognition (NER) model for mountain name recognition. The model is based on the `dslim/bert-large-NER` architecture and was fine-tuned on a relabeled subset of the [DFKI-SLT/few-nerd dataset](https://github.com/DFKI-SLT/few-nerd). The model achieves an F1 Score of 87.42% on the test set.
## Inference
Here's sample code for performing inference with the model:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
model = AutoModelForTokenClassification.from_pretrained("dieumerci/mountain-recognition-ner")
tokenizer = AutoTokenizer.from_pretrained("dieumerci/mountain-recognition-ner")
classifier = pipeline("ner", model=model, tokenizer=tokenizer)
text = "Next on our list is Denali Peak, also known as Mount McKinley, in Alaska."
result = classifier(text)
print(result)
```
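To get grouped entity spans instead of per-token predictions, the transformers token-classification pipeline also accepts an aggregation strategy, e.g. `pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")`.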
| null | transformers | token-classification | null | null | null | null | null | null | null | null | null | dieumerci/mountain-recognition-ner | [
-0.44375431537628174,
-0.5523371696472168,
0.17333894968032837,
-0.059877775609493256,
-0.3694521188735962,
-0.10308351367712021,
0.1235203742980957,
-0.1456800103187561,
0.2443000227212906,
0.3852440416812897,
-0.663625180721283,
-0.5517242550849915,
-0.9155896306037903,
0.066030614078044... |
KarlGauss/bert-base-italian-xxl-cased-finetuned-paisa | KarlGauss | 2023-11-29T17:01:30Z | 3 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-generation",
"generated_from_trainer",
"base_model:dbmdz/bert-base-italian-xxl-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:01:30Z | 2023-11-26T20:37:24.000Z | null | null | ---
license: mit
base_model: dbmdz/bert-base-italian-xxl-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-italian-xxl-cased-finetuned-paisa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-italian-xxl-cased-finetuned-paisa
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0 | 1.0 | 63471 | 0.0000 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | KarlGauss/bert-base-italian-xxl-cased-finetuned-paisa | [
-0.5999939441680908,
-0.7174893021583557,
0.17882491648197174,
0.23684661090373993,
-0.4656686782836914,
-0.6088526844978333,
-0.2810913026332855,
-0.23863254487514496,
0.2711648941040039,
0.4657701551914215,
-0.8701969981193542,
-0.725706934928894,
-0.5698373317718506,
-0.2184243500232696... |
HarshaSingamshetty1/detr-resnet-50_finetuned_cppe5 | HarshaSingamshetty1 | 2023-11-29T06:04:23Z | 3 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T06:04:23Z | 2023-11-27T05:54:46.000Z | null | null | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | object-detection | null | null | null | null | null | null | null | null | null | HarshaSingamshetty1/detr-resnet-50_finetuned_cppe5 | [
-0.5536611080169678,
-0.5743716955184937,
0.03172170743346214,
0.17145706713199615,
-0.32718339562416077,
-0.3281823396682739,
-0.17604516446590424,
-0.30823859572410583,
0.24667976796627045,
0.3207131028175354,
-0.9257379174232483,
-0.40304362773895264,
-0.5001497864723206,
0.211130738258... |
Ransaka/whisper-tiny-sinhala-20k | Ransaka | 2023-11-29T06:50:53Z | 3 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:Ransaka/SinhalaASR",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2023-11-29T06:50:53Z | 2023-11-27T14:48:55.000Z | null | null | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- Ransaka/SinhalaASR
metrics:
- wer
model-index:
- name: whisper-tiny-sinhala-20k
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: sinhala_asr
type: sinhala_asr
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 92.99603723159156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-sinhala-20k
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the sinhala_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2433
- Wer: 92.9960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4207 | 0.4 | 1000 | 0.3978 | 221.9058 |
| 0.2966 | 0.8 | 2000 | 0.3009 | 136.3423 |
| 0.226 | 1.2 | 3000 | 0.2661 | 97.6638 |
| 0.2224 | 1.6 | 4000 | 0.2510 | 92.3279 |
| 0.2034 | 2.0 | 5000 | 0.2433 | 92.9960 |
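
The Wer column above is a percentage, so values over 100 in early epochs simply mean the model produced more errors than there are reference words. A hedged sketch of how such a score is typically computed with the `evaluate` library (the strings below are invented examples, not dataset samples):

```python
# Illustrative only: word error rate as a percentage, the convention used
# in the table above. The example strings are invented romanized Sinhala.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["mama gedara yanawa"]  # hypothetical model output
references = ["mama gedara yami"]     # hypothetical ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```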
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0 | null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | Ransaka/whisper-tiny-sinhala-20k | [
-0.3726228177547455,
-0.4931570589542389,
-0.07573724538087845,
0.08650514483451843,
-0.37291061878204346,
-0.47079360485076904,
-0.30771616101264954,
-0.36595675349235535,
0.20433539152145386,
0.2974624037742615,
-0.7141334414482117,
-0.4439946413040161,
-0.6334779262542725,
-0.1935423165... |
buscaholding/buscacerveja-beer | buscaholding | 2023-11-30T01:19:31Z | 3 | 0 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-30T01:19:31Z | 2023-11-28T00:47:47.000Z | null | null | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: buscacerveja-beer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# buscacerveja-beer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7421
- Accuracy: 0.0870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the gradient-accumulation sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
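
With a per-device batch size of 1 and `gradient_accumulation_steps: 4`, gradients from four micro-batches are combined before each optimizer step, which is where the total train batch size of 4 above comes from. A self-contained toy sketch of that mechanic (the linear model and random data are placeholders, not the actual BERT classifier):

```python
# Toy illustration of gradient accumulation; not the actual training loop.
import torch

model = torch.nn.Linear(4, 2)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5,
                             betas=(0.9, 0.999), eps=1e-8)
accumulation_steps = 4  # as listed above

optimizer.zero_grad()
for step in range(8):  # 8 micro-batches -> 2 optimizer steps
    x, y = torch.randn(1, 4), torch.randn(1, 2)  # micro-batch of size 1
    loss = torch.nn.functional.mse_loss(model(x), y) / accumulation_steps
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()  # effective batch size: 1 * 4 = 4
        optimizer.zero_grad()
```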
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.94 | 8 | 3.9319 | 0.0870 |
| No log | 2.0 | 17 | 3.7933 | 0.0870 |
| No log | 2.82 | 24 | 3.7421 | 0.0870 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0.dev20230621+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | buscaholding/buscacerveja-beer | [
-0.5236729383468628,
-0.7184137105941772,
0.16060002148151398,
0.33015626668930054,
-0.3930637538433075,
-0.4400760531425476,
-0.2211870700120926,
-0.26810339093208313,
0.12659546732902527,
0.3635634779930115,
-0.776734471321106,
-0.6585050821304321,
-0.6476738452911377,
-0.311041414737701... |
mat27/medmnistPrueba | mat27 | 2023-11-29T13:10:49Z | 3 | 0 | null | [
"keras",
"tensorflow",
"medmnist",
"region:us"
] | 2023-11-29T13:10:49Z | 2023-11-28T14:37:31.000Z | null | null | ---
library_name: keras
tags:
- tensorflow
- keras
- medmnist
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this table):
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
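
The model architecture is not documented in the card, but the optimizer table can be reconstructed as a hypothetical Keras sketch; the stored learning rate 0.0010000000474974513 is just 1e-3 rounded through float32:

```python
# Hypothetical reconstruction of the optimizer from the table above; the
# model this card describes is not documented.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-3,  # stored as 0.0010000000474974513 in float32
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
```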
| null | keras | null | null | null | null | null | null | null | null | null | null | mat27/medmnistPrueba | [
-0.5795427560806274,
-0.6300151944160461,
0.437518447637558,
0.08954446762800217,
-0.5213861465454102,
-0.248442605137825,
0.02036384493112564,
-0.006666964385658503,
0.3586036264896393,
0.3324301838874817,
-0.6915732026100159,
-0.7539809942245483,
-0.531462550163269,
0.0639881044626236,
... |
Jungwonchang/whisper_medium.en-Full-SPGIspeech-xs | Jungwonchang | 2023-11-29T06:53:31Z | 3 | 0 | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:Jungwonchang/spgispeech_xs",
"base_model:openai/whisper-medium.en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2023-11-29T06:53:31Z | 2023-11-28T14:38:33.000Z | null | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Jungwonchang/spgispeech_xs
base_model: openai/whisper-medium.en
model-index:
- name: openai/whisper-medium.en, all the parameters updated for 5 epochs
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Test set for spgispeech
type: kensho/spgispeech
config: test
split: test
metrics:
- type: wer
value: 6.67
name: WER
- type: cer
value: 1.98
name: CER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium.en, all the parameters updated for 5 epochs
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on a 2-hour subset of SPGIspeech (a custom dataset).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the scheduler sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 120
- mixed_precision_training: Native AMP
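
A hedged sketch of the listed schedule, linear decay with 50 warmup steps over 120 training steps, using the standard `transformers` helper (the parameters and optimizer below are placeholders, not the actual Whisper model):

```python
# Placeholder illustration of the warmup + linear-decay schedule above;
# a real run would pass the Whisper model's parameters instead.
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters
optimizer = torch.optim.Adam(params, lr=1e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=50, num_training_steps=120
)
lrs = []
for _ in range(120):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])
# lrs rises linearly to 1e-5 over the first 50 steps, then decays toward 0.
```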
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.0
| null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | Jungwonchang/whisper_medium.en-Full-SPGIspeech-xs | [
-0.40213510394096375,
-0.7420747876167297,
0.09970064461231232,
0.33206790685653687,
-0.4893990755081177,
-0.6789307594299316,
-0.38768863677978516,
-0.5111156105995178,
0.20951291918754578,
0.3211204409599304,
-0.6933571100234985,
-0.44204068183898926,
-0.6429167985916138,
-0.153919219970... |
Sagicc/whisper-small-sr-jv | Sagicc | 2023-11-29T17:31:21Z | 3 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sr",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:31:21Z | 2023-11-28T21:00:08.000Z | null | null | ---
language:
- sr
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Sr JV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sr JV
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Juzne Vesti dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8988
- Wer Ortho: 0.4591
- Wer: 0.3415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.5956 | 0.74 | 500 | 0.8063 | 0.4794 | 0.3674 |
| 0.4401 | 1.48 | 1000 | 0.7975 | 0.4574 | 0.3423 |
| 0.3029 | 2.22 | 1500 | 0.7821 | 0.4512 | 0.3392 |
| 0.3016 | 2.96 | 2000 | 0.7828 | 0.4497 | 0.3318 |
| 0.2372 | 3.7 | 2500 | 0.8254 | 0.4503 | 0.3335 |
| 0.1762 | 4.44 | 3000 | 0.8402 | 0.4505 | 0.3381 |
| 0.1414 | 5.18 | 3500 | 0.8945 | 0.4584 | 0.3418 |
| 0.1326 | 5.92 | 4000 | 0.8988 | 0.4591 | 0.3415 |
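
Given the final WER above, a hypothetical inference sketch for this checkpoint with the `transformers` pipeline (the audio path is a placeholder):

```python
# Hypothetical usage example; "sample_sr.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Sagicc/whisper-small-sr-jv",
)
print(asr("sample_sr.wav")["text"])
```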
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
| null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | Sagicc/whisper-small-sr-jv | [
-0.4509182870388031,
-0.5284640192985535,
0.08106904476881027,
-0.041901011019945145,
-0.29720163345336914,
-0.56578129529953,
-0.25793150067329407,
-0.2269902229309082,
0.2015814334154129,
0.29410669207572937,
-0.8188802003860474,
-0.6380171775817871,
-0.649052083492279,
-0.30098930001258... |