| modelId (string, len 4–111) | lastModified (string, len 24) | tags (list) | pipeline_tag (string, len 5–30, nullable) | author (string, len 2–34, nullable) | config (null) | securityStatus (null) | id (string, len 4–111) | likes (int64, 0–9.53k) | downloads (int64, 2–73.6M) | library_name (string, len 2–84, nullable) | created (timestamp[us]) | card (string, len 101–901k) | card_len (int64, 101–901k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S1X3L4/a2c-PandaReachDense-v2 | 2023-07-24T18:29:45.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | S1X3L4 | null | null | S1X3L4/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-24T18:26:40 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.52 +/- 0.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("S1X3L4/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
| 802 | [
[
-0.019775390625,
-0.0474853515625,
-0.004791259765625,
0.0469970703125,
-0.00016677379608154297,
-0.006023406982421875,
0.033203125,
-0.0249786376953125,
0.028045654296875,
0.04266357421875,
-0.0625,
-0.0289764404296875,
-0.03277587890625,
-0.006629943847656... |
gwlms/bert-base-token-dropping-dewiki-v1 | 2023-09-06T22:12:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"de",
"dataset:gwlms/dewiki-20230701-chunks",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | gwlms | null | null | gwlms/bert-base-token-dropping-dewiki-v1 | 0 | 2 | transformers | 2023-07-24T20:33:37 | ---
license: mit
datasets:
- gwlms/dewiki-20230701-chunks
language:
- de
---
# German Wikipedia LMs
Coming soon!
# Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️ | 302 | [
[
-0.027435302734375,
-0.03253173828125,
0.049713134765625,
0.0194549560546875,
-0.023406982421875,
0.0068359375,
-0.0037517547607421875,
-0.03326416015625,
0.0418701171875,
0.0311431884765625,
-0.0552978515625,
-0.049835205078125,
-0.029693603515625,
-0.00261... |
snicolau/a2c-AntBulletEnv-v0 | 2023-07-24T20:37:28.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | snicolau | null | null | snicolau/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-24T20:36:17 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1662.96 +/- 314.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("snicolau/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
| 791 | [
[
-0.02679443359375,
-0.044403076171875,
0.01070404052734375,
0.0208892822265625,
-0.0035266876220703125,
0.001781463623046875,
0.0187530517578125,
-0.0176239013671875,
0.0193939208984375,
0.0265655517578125,
-0.052581787109375,
-0.037506103515625,
-0.044250488281... |
snicolau/a2c-PandaReachDense-v2 | 2023-07-24T21:30:26.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | snicolau | null | null | snicolau/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-24T21:27:42 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.93 +/- 0.43
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("snicolau/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
YarramsettiNaresh/a2c-PandaReachDense-v2 | 2023-07-25T03:52:07.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | YarramsettiNaresh | null | null | YarramsettiNaresh/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T03:49:01 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.28 +/- 0.91
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("YarramsettiNaresh/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
jpvlinhares/ppo-LunarLander-v2 | 2023-07-25T04:17:01.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | jpvlinhares | null | null | jpvlinhares/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T04:16:38 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.27 +/- 23.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("jpvlinhares/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
viethoangtranduong/v1-7b-llm-v2-e10 | 2023-07-25T07:44:03.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | viethoangtranduong | null | null | viethoangtranduong/v1-7b-llm-v2-e10 | 0 | 2 | transformers | 2023-07-25T06:52:35 | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
---
# Model Trained Using AutoTrain | 120 | [
[
-0.002300262451171875,
0.01140594482421875,
0.00653839111328125,
0.01319122314453125,
-0.0217437744140625,
0.0012025833129882812,
0.0394287109375,
-0.0081634521484375,
-0.0173187255859375,
0.01898193359375,
-0.03948974609375,
0.01512908935546875,
-0.044982910156... |
Littlelearner001/PP0-LunarLander-V2 | 2023-07-25T07:05:45.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Littlelearner001 | null | null | Littlelearner001/PP0-LunarLander-V2 | 0 | 2 | stable-baselines3 | 2023-07-25T07:05:24 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.43 +/- 39.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess; this repo does not follow the usual naming scheme.
checkpoint = load_from_hub("Littlelearner001/PP0-LunarLander-V2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
msani/ppo-lunarlander-v2 | 2023-07-25T09:30:33.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | msani | null | null | msani/ppo-lunarlander-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T09:30:07 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -162.36 +/- 20.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("msani/ppo-lunarlander-v2", "ppo-lunarlander-v2.zip")
model = PPO.load(checkpoint)
```
| 785 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
YarramsettiNaresh/ppo-LunarLander-v2-1 | 2023-07-25T09:39:54.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | YarramsettiNaresh | null | null | YarramsettiNaresh/ppo-LunarLander-v2-1 | 0 | 2 | stable-baselines3 | 2023-07-25T09:39:36 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.33 +/- 14.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess; check the repo's files.
checkpoint = load_from_hub("YarramsettiNaresh/ppo-LunarLander-v2-1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
mojtabak/ppo-LunarLander-v2 | 2023-07-25T14:38:13.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | mojtabak | null | null | mojtabak/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T09:58:30 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.76 +/- 32.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the repo name.
checkpoint = load_from_hub("mojtabak/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
kai824/FirstAssignmentTest | 2023-07-25T11:12:46.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kai824 | null | null | kai824/FirstAssignmentTest | 0 | 2 | stable-baselines3 | 2023-07-25T11:12:25 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.59 +/- 17.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess; this repo does not follow the usual naming scheme.
checkpoint = load_from_hub("kai824/FirstAssignmentTest", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML | 2023-09-27T13:00:27.000Z | [
"transformers",
"llama",
"sft",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:shahules786/orca-chat",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:atom-in-the-universe/fanfics-10k-50k",
"arxiv:2306.02707",
"license:llama2",
"text-generation-inference",
"region:us"
... | text-generation | TheBloke | null | null | TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML | 51 | 2 | transformers | 2023-07-25T11:47:23 | ---
language:
- en
license: llama2
tags:
- sft
datasets:
- ehartford/dolphin
- shahules786/orca-chat
- togethercomputer/RedPajama-Data-1T
- atom-in-the-universe/fanfics-10k-50k
model_name: Llama2 13B Orca 8K 3319
inference: false
model_creator: OpenAssistant
model_link: https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
widget:
- text: <|system|>You are an AI assistant. You will be given a task. You must generate
a detailed and long answer.</s><|prompter|>What is a meme, and what's the history
behind this word?</s><|assistant|>
- text: <|system|>You are an AI assistant that helps people find information.</s><|prompter|>What's
the Earth total population</s><|assistant|>
- text: <|system|>You are an AI assistant that follows instruction extremely well.
Help as much as you can.</s><|prompter|>Write a story about future of AI development</s><|assistant|>
base_model: OpenAssistant/llama2-13b-orca-8k-3319
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 13B Orca 8K 3319 - GGML
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319)
## Description
This repo contains GGML format model files for [OpenAssistant's Llama2 13B Orca 8K 3319](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML)
* [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-13b-orca-8k-3319)
## Prompt template: OpenAssistant-System
```
<|system|>{system_message}</s><|prompter|>{prompt}</s><|assistant|>
```
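For illustration, the template can be filled with a Python f-string (the system and user text below are placeholders):

```python
# Placeholder messages; substitute your own.
system_message = "You are an AI assistant."
user_prompt = "What is a meme?"

prompt = f"<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>"
print(prompt)
```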
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
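As a rough sanity check on the bpw figures quoted above, the super-block arithmetic for the "type-0" formats can be sketched in Python (this assumes 256 weights per super-block and a single fp16 super-block scale of overhead; the exact llama.cpp layouts vary by type):

```python
def kquant_bpw(weight_bits, scale_bits, blocks=16, block_weights=16, overhead_bits=16):
    """Effective bits per weight: raw weights + per-block scales + fp16 super-scale."""
    n = blocks * block_weights  # 256 weights per super-block
    total_bits = n * weight_bits + blocks * scale_bits + overhead_bits
    return total_bits / n

print(kquant_bpw(3, 6))  # GGML_TYPE_Q3_K -> 3.4375
print(kquant_bpw(6, 8))  # GGML_TYPE_Q6_K -> 6.5625
```

Both results match the figures in the list above; the "type-1" formats add per-block minimums on top of this.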
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.15 GB| 11.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [openassistant-llama2-13b-orca-8k-3319.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML/blob/main/openassistant-llama2-13b-orca-8k-3319.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m openassistant-llama2-13b-orca-8k-3319.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|system|>You are a story writing assistant.</s><|prompter|>Write a story about llamas</s><|assistant|>"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: OpenAssistant's Llama2 13B Orca 8K 3319
# llama2-13b-orca-8k-3319
## Model Description
This model is a fine-tuning of Meta's Llama2 13B model with 8K context size on a long-conversation variant of the Dolphin dataset ([orca-chat](https://huggingface.co/datasets/shahules786/orca-chat)).
Note: **At least Huggingface Transformers [4.31.0](https://pypi.org/project/transformers/4.31.0/) is required to load this model!**
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
user_prompt = "Write me a poem please"
prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>"""
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Model Details
- base model: [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b)
- License: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
- sampling report: [2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-07-25_OpenAssistant_llama2-13b-orca-8k-3319_sampling_llama2_prompt.json)
- wandb: [public-sft/runs/2jfazjt9](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9)
- checkpoint: 3319 steps
- datatype: fp16
- sponsored by: [Redmond.ai](https://redmond.ai/)
## Long context (RoPE Scaling)
This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was recently
added to [Huggingface transformers](https://github.com/huggingface/transformers/). Before loading this model please make sure
HF transformers >=4.31.0 is installed (`pip install transformers>=4.31.0`).
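The linear-scaling idea itself is simple: position indices are divided by the scale factor before the rotary angles are computed, so an 8k context maps onto the position range the base model was trained on. A minimal sketch of the mechanism (our own illustration, not the actual transformers implementation):

```python
def rope_angles(position, dim, scale=1.0, base=10000.0):
    # Linear RoPE scaling: divide the position index by `scale` before
    # computing the rotary angles, so an 8k context reuses the angle
    # range the base model saw for 4k positions (scale = 2).
    pos = position / scale
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# Position 8000 with scale 2 yields the same angles as position 4000 unscaled.
assert rope_angles(8000, 64, scale=2.0) == rope_angles(4000, 64, scale=1.0)
```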
## Conversation Template
For the initial response, use the following template (e.g. the [llama2 default system prompt](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46) works well as the system message):
```
<|system|>system message</s><|prompter|>user prompt</s><|assistant|>
```
For multi-turn conversations use:
```
<|system|>system message</s><|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|>
```
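As an illustration, the templates above can be assembled with a small helper (the function below is our own sketch, not part of the model's tooling):

```python
def build_prompt(system_message, turns, next_user_prompt):
    # `turns` is a list of (user, assistant) pairs from earlier in the conversation.
    prompt = f"<|system|>{system_message}</s>"
    for user, assistant in turns:
        prompt += f"<|prompter|>{user}</s><|assistant|>{assistant}</s>"
    # End with an open assistant tag so the model continues from there.
    prompt += f"<|prompter|>{next_user_prompt}</s><|assistant|>"
    return prompt

# First turn: no history, just the system message and the user prompt.
assert build_prompt("sys", [], "Q1") == "<|system|>sys</s><|prompter|>Q1</s><|assistant|>"
```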
The model was trained with the following 15 system messages used to generate the training examples (see [ORCA paper](https://arxiv.org/abs/2306.02707)):
1. You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.
2. You are an AI assistant. You will be given a task. You must generate a detailed and long answer.
3. You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.
4. You are an AI assistant that follows instruction extremely well. Help as much as you can.
5. You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer.
6. You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
7. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old.
8. Explain how you used the definition to come up with the answer.
9. You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question.
10. You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-by-step and justify your answer.
11. User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer.
12. You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer.
13. You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task.
14. Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part \#: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria.
15. You are an AI assistant that helps people find information.
## Datasets: Orca-Chat/Dolphin, RedPajama1T & FanFics
This model was trained on:
- [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat)
- [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- [atom-in-the-universe/fanfics-10k-50k](https://huggingface.co/datasets/atom-in-the-universe/fanfics-10k-50k)
```
Dataset Composition:
  Train (sampled):
orca-chat: 188842 (100%)
fanfics: 47760 (100%)
red_pajama: 188262 (25%)
Valid:
orca-chat: 5000
fanfics: 1000
red_pajama: 1000
```
The dataset [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations
to improve long-context training.
Additionally, RedPajama and FanFics were used for classic language modelling as an auxiliary task to improve the RoPE scaling for the 8k context size.
## Model Configuration
```
llama2_13b_orca_8k:
rng_seed: 0xe1291f1a
use_custom_sampler: true
sort_by_length: false
dtype: fp16
log_dir: "llama2_log_13b_orca_8k"
learning_rate: 1e-5
model_name: /mnt/data/llama2/Llama-2-13b-hf/
output_dir: llama2_13b_orca_8k
deepspeed_config: configs/zero_config_pretrain.json
weight_decay: 0.0
max_length: 8192
warmup_steps: 100
use_flash_attention: true
gradient_checkpointing: true
gradient_accumulation_steps: 8
per_device_train_batch_size: 2
per_device_eval_batch_size: 1
residual_dropout: 0.0
eval_steps: 200
save_steps: 1000 # (total steps: 3319)
num_train_epochs: 1
save_total_limit: 4
superhot: true
superhot_config:
type: linear
scale: 2
datasets:
- orca-chat:
max_val_set: 5000
- fanfics:
max_chunk_size: 65535
max_val_set: 1000
- red_pajama:
fraction: 0.25
max_val_set: 1000
max_chunk_size: 65535
peft_model: false
```
# Developers
- [shahules786](https://github.com/shahules786)
- [jordiclive](https://github.com/jordiclive)
- [andreaskoepf](https://github.com/andreaskoepf/)
# Special Thanks
We want to especially thank Eric Hartford who spared no expense in replicating ORCA and making it available at [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)!
Also, shoutout to the whole team working on [LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b) & the [scaled-rope](https://github.com/jquesnelle/scaled-rope) repository for their awesome work: bloc97, jquesnelle & conceptofmind!
The whole Open-Assistant team is very grateful for the continued support of [Redmond.ai](https://redmond.ai/) who sponsored the training compute required for this model.
# License
- Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
- Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the [Acceptable Use Policy](https://ai.meta.com/llama/use-policy) for the Llama Materials.
| 24,865 | [
[
-0.037933349609375,
-0.0618896484375,
0.023651123046875,
0.006641387939453125,
-0.0273895263671875,
-0.01061248779296875,
-0.0010595321655273438,
-0.05218505859375,
0.029571533203125,
0.00927734375,
-0.04241943359375,
-0.05120849609375,
-0.0355224609375,
0.0... |
Matej/bert-small-buddhist-nonbuddhist-sanskrit | 2023-07-25T12:26:33.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | Matej | null | null | Matej/bert-small-buddhist-nonbuddhist-sanskrit | 0 | 2 | transformers | 2023-07-25T12:05:43 | # bert-small-buddhist-nonbuddhist-sanskrit
BERT model trained on a lemmatized corpus containing Buddhist and non-Buddhist Sanskrit texts.
## Model description
The model has the BERT architecture and was pretrained from scratch as a masked language model
on the lemmatized Sanskrit corpus. Due to a lack of resources, and to prevent overfitting, the model is smaller than bert-base:
the numbers of attention heads and hidden layers have both been reduced to 8, and the context has been reduced to 128 tokens. The vocabulary size is 10,000 tokens.
## How to use it
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("Matej/bert-small-buddhist-nonbuddhist-sanskrit")
tokenizer = AutoTokenizer.from_pretrained("Matej/bert-small-buddhist-nonbuddhist-sanskrit", use_fast=True)
```
## Intended uses & limitations
MIT license, no limitations
## Training and evaluation data
See the paper 'Embeddings models for Buddhist Sanskrit' for details on the corpora and the evaluation procedure.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Framework versions
- Transformers 4.20.0
- Pytorch 1.9.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,352 | [
[
-0.0137786865234375,
-0.05267333984375,
-0.00009673833847045898,
0.0282135009765625,
-0.0526123046875,
-0.0209503173828125,
-0.034759521484375,
-0.017669677734375,
0.0301971435546875,
-0.00437164306640625,
-0.03521728515625,
-0.0305328369140625,
-0.044677734375,... |
youngsun05/bert-finetuned-squad | 2023-10-16T01:00:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | youngsun05 | null | null | youngsun05/bert-finetuned-squad | 0 | 2 | transformers | 2023-07-25T14:10:35 | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,057 | [
[
-0.043121337890625,
-0.053863525390625,
0.0062255859375,
0.01849365234375,
-0.0263519287109375,
-0.019073486328125,
-0.0108489990234375,
-0.0178070068359375,
0.0163421630859375,
0.027801513671875,
-0.0748291015625,
-0.03424072265625,
-0.0343017578125,
-0.006... |
magnustragardh/a2c-AntBulletEnv-v0 | 2023-07-25T14:19:16.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | magnustragardh | null | null | magnustragardh/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-25T14:18:18 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1515.62 +/- 65.89
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
Adi0010/a2c-PandaReachDense-v2 | 2023-07-25T16:28:24.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Adi0010 | null | null | Adi0010/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T15:49:38 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.84 +/- 0.41
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
TitanML/ct2-int8-mt0-xl | 2023-07-25T18:11:08.000Z | [
"transformers",
"mt5",
"text2text-generation",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn... | text2text-generation | TitanML | null | null | TitanML/ct2-int8-mt0-xl | 0 | 2 | transformers | 2023-07-25T17:43:37 | ---
datasets:
- bigscience/xP3
- mc4
license: apache-2.0
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
pipeline_tag: text2text-generation
widget:
- text: >-
一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
review as positive, neutral or negative?
example_title: zh-en sentiment
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
example_title: zh-zh sentiment
- text: Suggest at least five related search terms to "Mạng neural nhân tạo".
example_title: vi-en query
- text: >-
Proposez au moins cinq mots clés concernant «Réseau de neurones
artificiels».
example_title: fr-fr query
- text: Explain in a sentence in Telugu what is backpropagation in neural networks.
example_title: te-en qa
- text: Why is the sky blue?
example_title: en-en qa
- text: >-
Write a fairy tale about a troll saving a princess from a dangerous dragon.
The fairy tale is a masterpiece that has achieved praise worldwide and its
moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
example_title: es-en fable
- text: >-
Write a fable about wood elves living in a forest that is suddenly invaded
by ogres. The fable is a masterpiece that has achieved praise worldwide and
its moral is "Violence is the last refuge of the incompetent". Fable (in
Hindi):
example_title: hi-en fable
model-index:
- name: mt0-xl
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 52.49
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 61.89
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.04
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 60.27
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 66.16
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.05
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 62.9
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 38.2
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 34.8
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 39
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 85.71
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 78.7
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 51.85
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.18
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.83
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.22
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.24
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.09
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 49.6
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 52.13
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.56
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.91
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.21
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.64
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 0
- type: Pass@10
value: 0
- type: Pass@100
value: 0
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: '2016'
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 79.1
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 72
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 70
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 66
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 71
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 70
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 53
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 64
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 60
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 68
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 65
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 70.09
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 77.17
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 69.03
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.08
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 75.71
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 65.65
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 74.85
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.14
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 68.89
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 72.93
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xl"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear where the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Furthermore, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, tell the model so explicitly, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
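As a lightweight illustration of the advice above, a small helper (hypothetical, not part of the model's API) can normalize prompts so the end of the input is unambiguous before they are passed to `model.generate`:

```python
def disambiguate_prompt(text: str, answer_cue: str = "") -> str:
    """Make the end of the input explicit, as recommended above.

    Appends a full stop if the prompt lacks terminal punctuation, and
    optionally adds a cue such as "Translation:" marking where the
    model's answer should begin.
    """
    text = text.strip()
    if not text.endswith((".", "!", "?")):
        text += "."
    if answer_cue:
        text += " " + answer_cue
    return text

# "Translate to English: Je t'aime" risks the model continuing the French;
# the normalized variants below make the expected behaviour clearer.
print(disambiguate_prompt("Translate to English: Je t'aime"))
# Translate to English: Je t'aime.
print(disambiguate_prompt("Translate to English: Je t'aime", "Translation:"))
# Translate to English: Je t'aime. Translation:
```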
# Training
## Model
- **Architecture:** Same as [mt5-xl](https://huggingface.co/google/mt5-xl), also refer to the `config.json` file
- **Finetuning steps:** 10000
- **Finetuning tokens:** 1.85 billion
- **Precision:** bfloat16
## Hardware
- **TPUs:** TPUv4-128
## Software
- **Orchestration:** [T5X](https://github.com/google-research/t5x)
- **Neural networks:** [Jax](https://github.com/google/jax)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
``` | 23,774 | [
[
-0.030731201171875,
-0.041259765625,
0.023345947265625,
0.0275421142578125,
-0.00763702392578125,
-0.004970550537109375,
-0.02386474609375,
-0.0251617431640625,
0.029449462890625,
-0.0108184814453125,
-0.0677490234375,
-0.039398193359375,
-0.040802001953125,
... |
Ahmed007/GPT2-Arabic_Poetry_generator | 2023-08-01T22:28:39.000Z | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Ahmed007 | null | null | Ahmed007/GPT2-Arabic_Poetry_generator | 1 | 2 | transformers | 2023-07-25T17:58:29 | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: GPT2-Arabic_Poetry_generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2-Arabic_Poetry_generator
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
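The reported total batch size follows from the per-device batch size and the gradient accumulation steps; a quick sanity check (assuming a single training device, which the card does not specify):

```python
# The effective batch size is the per-device batch size times the
# gradient accumulation steps (times the number of devices, if any).
def effective_batch_size(per_device: int, accumulation_steps: int, num_devices: int = 1) -> int:
    return per_device * accumulation_steps * num_devices

print(effective_batch_size(32, 8))  # 256, matching total_train_batch_size above
```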
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,131 | [
[
-0.0254058837890625,
-0.04461669921875,
0.0180206298828125,
0.0128631591796875,
-0.039276123046875,
-0.027069091796875,
-0.01273345947265625,
-0.0275421142578125,
-0.0041656494140625,
0.022430419921875,
-0.0418701171875,
-0.047637939453125,
-0.0621337890625,
... |
Adi0010/ppo-sb-LunarLander-v2 | 2023-07-25T18:12:30.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Adi0010 | null | null | Adi0010/ppo-sb-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T18:12:09 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.64 +/- 20.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
TitanML/ct2-int8-mt5-xl | 2023-07-25T19:29:47.000Z | [
"transformers",
"mt5",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"ha... | text2text-generation | TitanML | null | null | TitanML/ct2-int8-mt5-xl | 1 | 2 | transformers | 2023-07-25T19:25:49 | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Abstract
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. | 2,827 | [
[
-0.0369873046875,
-0.01194000244140625,
0.0203704833984375,
0.0287933349609375,
-0.020599365234375,
0.0251922607421875,
-0.02679443359375,
-0.031341552734375,
0.01204681396484375,
0.0252838134765625,
-0.04913330078125,
-0.0599365234375,
-0.06512451171875,
0.... |
Nerozud/ppo-LunarLander-v2 | 2023-07-25T19:45:00.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Nerozud | null | null | Nerozud/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T19:44:40 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.79 +/- 18.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.0002123117446899414,
-0.027130126953125,
0.017059326171875,
0.0233612060546875,
-0.00606536865234375,
0.0027256011962890625,
0.034454345703125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.0352783203125,
-0.0343017578125,
... |
ChairWorm/ppo-LunarLander-v2 | 2023-07-25T20:05:02.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | ChairWorm | null | null | ChairWorm/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-25T20:04:39 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 200.62 +/- 71.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.0002123117446899414,
-0.027130126953125,
0.017059326171875,
0.0233612060546875,
-0.00606536865234375,
0.0027256011962890625,
0.034454345703125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.0352783203125,
-0.0343017578125,
... |
giniwini/ppo-PyramidsRND | 2023-07-25T20:28:13.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | giniwini | null | null | giniwini/ppo-PyramidsRND | 0 | 2 | ml-agents | 2023-07-25T20:28:05 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: giniwini/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,336 | [
[
-0.040191650390625,
-0.03411865234375,
0.001827239990234375,
0.01380157470703125,
-0.0100250244140625,
0.01233673095703125,
0.016998291015625,
-0.015350341796875,
0.033782958984375,
0.028900146484375,
-0.041595458984375,
-0.04949951171875,
-0.02899169921875,
... |
annazhong/vit-base-patch16-224-finetuned-feature-maps-v3 | 2023-07-26T22:01:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | annazhong | null | null | annazhong/vit-base-patch16-224-finetuned-feature-maps-v3 | 0 | 2 | transformers | 2023-07-25T21:33:03 | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-feature-maps-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-feature-maps-v3
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0989
- Accuracy: 0.3810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 150
- eval_batch_size: 150
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 600
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.0989 | 0.3810 |
| No log | 2.0 | 2 | 1.1292 | 0.3651 |
| No log | 3.0 | 3 | 1.0972 | 0.3810 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| 1,668 | [
[
-0.038421630859375,
-0.049285888671875,
0.00982666015625,
0.0113525390625,
-0.025482177734375,
-0.03570556640625,
-0.019561767578125,
-0.01369476318359375,
0.00905609130859375,
0.0343017578125,
-0.05078125,
-0.049713134765625,
-0.051025390625,
-0.02310180664... |
cehenderson/llama2-qlora-finetuned-french | 2023-07-26T01:23:29.000Z | [
"peft",
"question-answering",
"region:us"
] | question-answering | cehenderson | null | null | cehenderson/llama2-qlora-finetuned-french | 1 | 2 | peft | 2023-07-25T22:48:30 | ---
library_name: peft
pipeline_tag: question-answering
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0 | 472 | [
[
-0.0479736328125,
-0.050048828125,
0.0302276611328125,
0.034454345703125,
-0.03924560546875,
0.00765228271484375,
0.0130767822265625,
-0.01396942138671875,
-0.01152801513671875,
0.03192138671875,
-0.0428466796875,
-0.0084228515625,
-0.03460693359375,
0.01276... |
eikoenchine/unit1 | 2023-07-25T23:41:49.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | eikoenchine | null | null | eikoenchine/unit1 | 0 | 2 | stable-baselines3 | 2023-07-25T23:33:43 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -615.69 +/- 470.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 786 | [
[
-0.0002123117446899414,
-0.027130126953125,
0.017059326171875,
0.0233612060546875,
-0.00606536865234375,
0.0027256011962890625,
0.034454345703125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.0352783203125,
-0.0343017578125,
... |
eikoenchine/ppo-LunarLander-v2 | 2023-07-26T00:44:53.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | eikoenchine | null | null | eikoenchine/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T00:32:30 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.85 +/- 18.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.0002123117446899414,
-0.027130126953125,
0.017059326171875,
0.0233612060546875,
-0.00606536865234375,
0.0027256011962890625,
0.034454345703125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.0352783203125,
-0.0343017578125,
... |
draziert/a2c-PandaReachDense-v2 | 2023-07-26T03:46:47.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | draziert | null | null | draziert/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T03:44:09 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.92 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.0474853515625,
-0.004791259765625,
0.04705810546875,
-0.00014722347259521484,
-0.00600433349609375,
0.03326416015625,
-0.0249786376953125,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625,
... |
EchoBomber/ppo-MountainCar-v0 | 2023-07-26T04:40:18.000Z | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | EchoBomber | null | null | EchoBomber/ppo-MountainCar-v0 | 0 | 2 | stable-baselines3 | 2023-07-26T04:39:59 | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -200.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.0146331787109375,
-0.0291290283203125,
0.0085296630859375,
0.0240936279296875,
-0.01666259765625,
-0.004154205322265625,
0.0262451171875,
-0.0173797607421875,
0.0137786865234375,
0.051361083984375,
-0.05126953125,
-0.0352783203125,
-0.0301055908203125,
-0... |
xianbin/ppo-LunarLander-v2 | 2023-07-26T05:51:59.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | xianbin | null | null | xianbin/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T05:10:59 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.14 +/- 25.23
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
Dumar/ppo-LunarLander-Ahmed | 2023-07-26T07:06:43.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Dumar | null | null | Dumar/ppo-LunarLander-Ahmed | 0 | 2 | stable-baselines3 | 2023-07-26T07:06:24 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.41 +/- 19.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
Samalabama66/PyramidsTraining | 2023-07-26T07:30:39.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Samalabama66 | null | null | Samalabama66/PyramidsTraining | 0 | 2 | ml-agents | 2023-07-26T07:30:37 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Samalabama66/PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,341 | [
[
-0.0406494140625,
-0.0350341796875,
0.001300811767578125,
0.0144805908203125,
-0.01007843017578125,
0.012359619140625,
0.0169830322265625,
-0.01488494873046875,
0.033477783203125,
0.0301361083984375,
-0.042144775390625,
-0.0501708984375,
-0.02960205078125,
-... |
YojitShinde/ppo-LunarLander | 2023-07-26T07:32:30.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | YojitShinde | null | null | YojitShinde/ppo-LunarLander | 0 | 2 | stable-baselines3 | 2023-07-26T07:32:10 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.90 +/- 16.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
Samalabama66/a2c-AntBulletEnv | 2023-07-26T08:48:09.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Samalabama66 | null | null | Samalabama66/a2c-AntBulletEnv | 0 | 2 | stable-baselines3 | 2023-07-26T08:47:06 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1234.42 +/- 188.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 791 | [
[
-0.02679443359375,
-0.044403076171875,
0.01070404052734375,
0.0208892822265625,
-0.0035266876220703125,
0.001781463623046875,
0.0187530517578125,
-0.0176239013671875,
0.0193939208984375,
0.0265655517578125,
-0.052581787109375,
-0.037506103515625,
-0.044250488281... |
justinhoang/a2c-PandaReachDense-v2 | 2023-07-26T09:04:24.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | justinhoang | null | null | justinhoang/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T09:02:36 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.50 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
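In the absence of the code above, here is a hedged end-to-end evaluation sketch (the `filename` is an assumption about this repo's layout, and `panda_gym` must be installed — importing it registers `PandaReachDense-v2` with gym; imports are deferred so the function can be defined without those packages):

```python
def evaluate_reach_agent(repo_id="justinhoang/a2c-PandaReachDense-v2",
                         filename="a2c-PandaReachDense-v2.zip",
                         n_eval_episodes=10):
    """Download the checkpoint, rebuild the env, and report mean reward."""
    import gym
    import panda_gym  # noqa: F401 -- side effect: registers the Panda envs
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import A2C
    from stable_baselines3.common.evaluation import evaluate_policy

    model = A2C.load(load_from_hub(repo_id=repo_id, filename=filename))
    env = gym.make("PandaReachDense-v2")
    # Returns (mean_reward, std_reward) over the requested episodes.
    return evaluate_policy(model, env, n_eval_episodes=n_eval_episodes)
```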
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
hfhz/ppo-lunav2 | 2023-07-26T09:34:33.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | hfhz | null | null | hfhz/ppo-lunav2 | 0 | 2 | stable-baselines3 | 2023-07-26T09:34:14 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.05 +/- 75.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
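Pending the code above, a hedged rollout sketch (the checkpoint `filename` is a guess at this repo's layout; the loop uses the classic 4-tuple `gym.step` API that the 2023-era stable-baselines3 stack expects, and imports are deferred so the function parses without those packages installed):

```python
def watch_lander(repo_id="hfhz/ppo-lunav2",
                 filename="ppo-lunav2.zip",
                 episodes=3):
    """Roll the trained PPO policy out in LunarLander-v2 and print returns."""
    import gym
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import PPO

    model = PPO.load(load_from_hub(repo_id=repo_id, filename=filename))
    env = gym.make("LunarLander-v2")
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action, _state = model.predict(obs, deterministic=True)
            obs, reward, done, _info = env.step(action)
            total += reward
        print(f"episode return: {total:.1f}")
    env.close()
```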
| 784 | [
[
-0.0001957416534423828,
-0.0271148681640625,
0.017059326171875,
0.023345947265625,
-0.006061553955078125,
0.002750396728515625,
0.034454345703125,
-0.01210784912109375,
0.0198516845703125,
0.06494140625,
-0.04315185546875,
-0.035247802734375,
-0.0343017578125,
... |
Samalabama66/a2c-PandaReachDense-v2 | 2023-07-27T08:52:24.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Samalabama66 | null | null | Samalabama66/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T09:41:51 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.58 +/- 0.16
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.0474853515625,
-0.004791259765625,
0.047027587890625,
-0.00016224384307861328,
-0.006038665771484375,
0.033203125,
-0.02496337890625,
0.028076171875,
0.042694091796875,
-0.06256103515625,
-0.02899169921875,
-0.03277587890625,
-0.006637... |
thliang01/LunarLander-v2 | 2023-07-26T10:05:44.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | thliang01 | null | null | thliang01/LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T10:05:24 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.85 +/- 13.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
GOATsan/ppo-LunarLander-v2 | 2023-07-26T11:12:32.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | GOATsan | null | null | GOATsan/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T11:12:15 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.88 +/- 13.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.0001957416534423828,
-0.0271148681640625,
0.017059326171875,
0.023345947265625,
-0.006061553955078125,
0.002750396728515625,
0.034454345703125,
-0.01210784912109375,
0.0198516845703125,
0.06494140625,
-0.04315185546875,
-0.035247802734375,
-0.0343017578125,
... |
dontgive99/mdeberta-v3-kor-further-ner | 2023-08-01T01:29:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | dontgive99 | null | null | dontgive99/mdeberta-v3-kor-further-ner | 0 | 2 | transformers | 2023-07-26T12:11:34 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mdeberta-v3-kor-further-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-kor-further-ner
This model is a fine-tuned version of [lighthouse/mdeberta-v3-base-kor-further](https://huggingface.co/lighthouse/mdeberta-v3-base-kor-further) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0261
- Precision: 0.9436
- Recall: 0.9378
- F1: 0.9407
- Accuracy: 0.9947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 357 | 0.0490 | 0.8634 | 0.8914 | 0.8772 | 0.9887 |
| 0.2165 | 2.0 | 714 | 0.0296 | 0.9276 | 0.9308 | 0.9292 | 0.9937 |
| 0.0314 | 3.0 | 1071 | 0.0281 | 0.9275 | 0.9299 | 0.9287 | 0.9937 |
| 0.0314 | 4.0 | 1428 | 0.0261 | 0.9436 | 0.9378 | 0.9407 | 0.9947 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.03582763671875,
-0.035919189453125,
0.0233917236328125,
0.0130767822265625,
-0.0237274169921875,
-0.02874755859375,
-0.0013647079467773438,
-0.0186309814453125,
0.0203399658203125,
0.027435302734375,
-0.049407958984375,
-0.051361083984375,
-0.04595947265625,
... |
mgigena/roberta-large-cuad | 2023-07-26T12:48:33.000Z | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | mgigena | null | null | mgigena/roberta-large-cuad | 0 | 2 | transformers | 2023-07-26T12:38:58 | ---
language:
- en
license: cc-by-4.0
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---
# Model Card for roberta-large-cuad
# Model Details
## Model Description
- **Developed by:** Hendrycks et al.
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [TheAtticusProject](https://github.com/TheAtticusProject/cuad)
- Associated Paper: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- Project website: [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data, Procedure, Preprocessing, etc.
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("mgigena/roberta-large-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("mgigena/roberta-large-cuad")
```
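A hedged inference sketch on top of the objects above (the contract clause and question are illustrative stand-ins, not CUAD data; the pipeline import is deferred so the function can be defined without transformers installed):

```python
def ask_contract_question(question, contract_text,
                          model_id="mgigena/roberta-large-cuad"):
    """Run extractive QA over a contract span and return the best answer."""
    from transformers import pipeline

    qa = pipeline("question-answering", model=model_id)
    # Returns a dict with "answer", "score", "start", "end".
    return qa(question=question, context=contract_text)

# Illustrative call (downloads the full roberta-large checkpoint on first use):
# ask_contract_question(
#     "What is the governing law of this agreement?",
#     "This Agreement shall be governed by the laws of the State of New York.",
# )
```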
</details> | 2,094 | [
[
-0.0228729248046875,
-0.061126708984375,
0.03680419921875,
-0.00615692138671875,
-0.021209716796875,
-0.006855010986328125,
-0.0089874267578125,
-0.0212860107421875,
0.010162353515625,
0.048004150390625,
-0.01715087890625,
-0.048095703125,
-0.049896240234375,
... |
petergriger/ppo_lunar_lander | 2023-07-26T14:10:46.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | petergriger | null | null | petergriger/ppo_lunar_lander | 0 | 2 | stable-baselines3 | 2023-07-26T14:10:33 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.52 +/- 19.37
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
faade/ppo-LunarLander-v2 | 2023-07-26T14:42:15.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | faade | null | null | faade/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T14:41:56 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.89 +/- 18.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
neuralsentry/vulnfixClassification-StarEncoder-DCM-Balanced | 2023-07-26T15:43:50.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | neuralsentry | null | null | neuralsentry/vulnfixClassification-StarEncoder-DCM-Balanced | 0 | 2 | transformers | 2023-07-26T15:39:24 | ---
base_model: neuralsentry/starencoder-git-commits-mlm
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: starencoder-vulnfix-classification-balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starencoder-vulnfix-classification-balanced
This model is a fine-tuned version of [neuralsentry/starencoder-git-commits-mlm](https://huggingface.co/neuralsentry/starencoder-git-commits-mlm) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2253
- Accuracy: 0.9259
- Precision: 0.9487
- Recall: 0.9391
- F1: 0.9438
- Roc Auc: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3612 | 1.0 | 42 | 0.2358 | 0.9057 | 0.9691 | 0.8860 | 0.9257 | 0.9152 |
| 0.1616 | 2.0 | 84 | 0.1950 | 0.9296 | 0.9490 | 0.9447 | 0.9468 | 0.9223 |
| 0.074 | 3.0 | 126 | 0.2253 | 0.9259 | 0.9487 | 0.9391 | 0.9438 | 0.9195 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| 1,881 | [
[
-0.01409149169921875,
-0.040252685546875,
0.00604248046875,
0.0010013580322265625,
-0.00942230224609375,
-0.00925445556640625,
0.005786895751953125,
-0.0219879150390625,
0.0218658447265625,
0.033782958984375,
-0.045318603515625,
-0.050994873046875,
-0.0514221191... |
hugfacerhaha/a2c-AntBulletEnv-v0 | 2023-07-26T16:35:14.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | hugfacerhaha | null | null | hugfacerhaha/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-26T16:33:59 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1208.84 +/- 123.93
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 791 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
DarkAirforce/ppo-PyramidsRND | 2023-07-26T17:55:43.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | DarkAirforce | null | null | DarkAirforce/ppo-PyramidsRND | 0 | 2 | ml-agents | 2023-07-26T17:47:23 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DarkAirforce/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,340 | [
[
-0.039276123046875,
-0.034912109375,
0.0021457672119140625,
0.01340484619140625,
-0.0111541748046875,
0.01324462890625,
0.0176544189453125,
-0.016571044921875,
0.03363037109375,
0.0297393798828125,
-0.041229248046875,
-0.048126220703125,
-0.029388427734375,
... |
anth0nyhak1m/FPC_model | 2023-07-26T19:12:40.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | anth0nyhak1m | null | null | anth0nyhak1m/FPC_model | 0 | 2 | transformers | 2023-07-26T19:11:16 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FPC_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FPC_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4029
- Accuracy: 0.9153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 285 | 1.1683 | 0.7397 |
| 1.5827 | 2.0 | 570 | 0.6301 | 0.8481 |
| 1.5827 | 3.0 | 855 | 0.5046 | 0.8755 |
| 0.4453 | 4.0 | 1140 | 0.4156 | 0.8941 |
| 0.4453 | 5.0 | 1425 | 0.3790 | 0.9153 |
| 0.1964 | 6.0 | 1710 | 0.3949 | 0.9078 |
| 0.1964 | 7.0 | 1995 | 0.3969 | 0.9153 |
| 0.1072 | 8.0 | 2280 | 0.4002 | 0.9153 |
| 0.0611 | 9.0 | 2565 | 0.4027 | 0.9141 |
| 0.0611 | 10.0 | 2850 | 0.4029 | 0.9153 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| 1,881 | [
[
-0.03131103515625,
-0.039825439453125,
0.01201629638671875,
0.01507568359375,
-0.0253143310546875,
-0.019866943359375,
-0.001720428466796875,
-0.0089263916015625,
0.006107330322265625,
0.01708984375,
-0.0498046875,
-0.0445556640625,
-0.052093505859375,
-0.01... |
Teapack1/model_KWS | 2023-07-26T19:55:00.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | Teapack1 | null | null | Teapack1/model_KWS | 0 | 2 | transformers | 2023-07-26T19:24:17 | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: model_KWS
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_KWS
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
- Accuracy: 0.9825
## Model description
Finetuned on custom commands: "ambient", "light", "off", "on", "scene1", "scene2", "scene3", "void"
## Intended uses & limitations
Intended for keyword spotting applications.
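As a hedged usage sketch (the `wav_path` argument is illustrative, and the pipeline import is deferred so the function can be defined without transformers installed; wav2vec2-base expects 16 kHz mono audio):

```python
def classify_command(wav_path, model_id="Teapack1/model_KWS"):
    """Score one audio clip against the eight keyword classes.

    Wraps the checkpoint in a transformers audio-classification
    pipeline; pass a path to a 16 kHz mono recording.
    """
    from transformers import pipeline

    classifier = pipeline("audio-classification", model=model_id)
    return classifier(wav_path)  # list of {"label": ..., "score": ...} dicts
```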
## Training and evaluation data
3200 training samples and 800 testing samples in total.
Originally, 20 samples of each class were recorded.
Each sample was then randomly augmented with a random selection of methods: pitch-shifting, time-stretching, volume change, and Gaussian noise.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0119 | 1.0 | 25 | 1.9832 | 0.375 |
| 1.4505 | 2.0 | 50 | 1.3361 | 0.8337 |
| 1.0767 | 3.0 | 75 | 0.8700 | 0.955 |
| 0.7448 | 4.0 | 100 | 0.6919 | 0.9513 |
| 0.6143 | 5.0 | 125 | 0.5333 | 0.9625 |
| 0.4924 | 6.0 | 150 | 0.4387 | 0.98 |
| 0.4544 | 7.0 | 175 | 0.3844 | 0.985 |
| 0.3888 | 8.0 | 200 | 0.3668 | 0.9812 |
| 0.3734 | 9.0 | 225 | 0.3436 | 0.9825 |
| 0.3522 | 10.0 | 250 | 0.3346 | 0.9825 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
| 2,602 | [
[
-0.03497314453125,
-0.040618896484375,
0.0033245086669921875,
0.00493621826171875,
-0.01385498046875,
-0.018310546875,
-0.0090179443359375,
-0.0224761962890625,
0.01006317138671875,
0.02130126953125,
-0.06439208984375,
-0.0592041015625,
-0.048370361328125,
-... |
Krainez/unit1-hw | 2023-07-26T19:41:38.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Krainez | null | null | Krainez/unit1-hw | 0 | 2 | stable-baselines3 | 2023-07-26T19:35:41 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.33 +/- 21.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
milktruck/ppo-LunarLander-v2 | 2023-07-26T20:27:35.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | milktruck | null | null | milktruck/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T19:54:33 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.56 +/- 17.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00021982192993164062,
-0.0271453857421875,
0.0170745849609375,
0.0233306884765625,
-0.006072998046875,
0.002765655517578125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.034332275390625... |
digitaljungle/lander1 | 2023-07-26T20:08:28.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | digitaljungle | null | null | digitaljungle/lander1 | 0 | 2 | stable-baselines3 | 2023-07-26T20:08:09 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.27 +/- 76.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00021982192993164062,
-0.0271453857421875,
0.0170745849609375,
0.0233306884765625,
-0.006072998046875,
0.002765655517578125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.034332275390625... |
bk6000/ppo-Pyramid | 2023-07-26T20:27:08.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | bk6000 | null | null | bk6000/ppo-Pyramid | 0 | 2 | ml-agents | 2023-07-26T20:27:06 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: bk6000/ppo-Pyramid
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,330 | [
[
-0.040252685546875,
-0.03350830078125,
0.002407073974609375,
0.0134124755859375,
-0.01078033447265625,
0.01253509521484375,
0.0179443359375,
-0.0149078369140625,
0.03265380859375,
0.030548095703125,
-0.040771484375,
-0.04913330078125,
-0.0293426513671875,
-0... |
Emperor-WS/ppo-PyramidsRND | 2023-07-26T20:33:28.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Emperor-WS | null | null | Emperor-WS/ppo-PyramidsRND | 0 | 2 | ml-agents | 2023-07-26T20:33:25 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Emperor-WS/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,338 | [
[
-0.0411376953125,
-0.0340576171875,
0.00212860107421875,
0.013336181640625,
-0.0110321044921875,
0.0126190185546875,
0.0163421630859375,
-0.01313018798828125,
0.033233642578125,
0.03057861328125,
-0.04058837890625,
-0.04949951171875,
-0.029510498046875,
-0.0... |
teilomillet/a2c-AntBulletEnv-v0 | 2023-07-26T20:41:24.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | teilomillet | null | null | teilomillet/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-26T20:40:15 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1210.59 +/- 73.36
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
lukelarue/ppo-LunarLander-v2 | 2023-07-26T22:57:45.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | lukelarue | null | null | lukelarue/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-26T22:57:25 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.53 +/- 16.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
zjunlp/llama-molinst-protein-7b | 2023-07-27T05:12:09.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"chemistry",
"biology",
"protein",
"instructions",
"arxiv:2306.08018",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | zjunlp | null | null | zjunlp/llama-molinst-protein-7b | 0 | 2 | transformers | 2023-07-27T01:26:46 | ---
license: apache-2.0
tags:
- chemistry
- biology
- protein
- instructions
---
This repo contains a fully fine-tuned [LLaMA-7b](https://huggingface.co/decapoda-research/llama-7b-hf), trained on the 🧬 **protein-oriented instructions** from the 🧪 [Mol-Instructions](https://huggingface.co/datasets/zjunlp/Mol-Instructions) dataset.
Instructions for running it can be found at https://github.com/zjunlp/Mol-Instructions.
> Please refer to our [paper](https://arxiv.org/abs/2306.08018) for more details.

<h3> 🧬 Tasks</h3>
<details>
<summary><b>Protein design</b></summary>
- *Design a protein sequence that exhibits the desired activity and specificity.*
1. The presence of Mg(2+) is necessary for the protein to function in the desired environment.
2. The AMP, (6S)-NADPHX binding site should be located in a region of the protein that is accessible to the ligand.
3. The designed protein should have ATP binding, NADPHX epimerase activity, metal ion binding, ADP-dependent NAD(P)H-hydrate dehydratase activity to facilitate nicotinamide nucleotide metabolic process.
4. For general function, the protein need meet that Catalyzes the epimerization of the S- and R-forms of NAD(P)HX, a damaged form of NAD(P)H that is a result of enzymatic or heat-dependent hydration
```
MSNELVLSREQVRRVDQRAIEAYGVPGIVLMENAGRGAAEIIRAACPSAQRVLIACGPGNNGGDGFVIARHLANAGWMVELLLACPADRITGDAQGNHEIIRRMNLPCAVMADARDLEAANDRFATADVIVDALLGTGASGPPREPIASLIRAINEAHRRVSAQPAPSVFAVDIPSGLDCDTGEAANPTVRADHTITFVARKIGFRNPAARDLLGRVHVVDIGAPRAAIQDALTGKSG
```
</details>
<details>
<summary><b>Catalytic activity prediction</b></summary>
- *Please evaluate the following protein sequence and provide an explanation of the enzyme's catalytic activity, including the chemical reaction it facilitates:* MDKVAVAGFLPEELCASLSLSPSFRGNQIFQWIGKGVDSFDAMTNLSAELRASLAEKAILRSTRVSDVLKADDGTVKLQIQTEDDLAVETVLLTDKAARKTACVSCQAGCAMGCAFCKTGTLGLARNLSAAEIVEQFLYLEKHAGALDNIVFMGMGEPLLNLDALRKAIAVLTDKRGRNLSSRRITVSTVGIVSGIYDLANNGPDVRLAVSLTTADETLRRELMPASLTNPLSDLRQAISYYIEKTGKRVTLEAVLLSGKNTSEKNADSLIAFAKGLDVHVNLIPWNPVEGLSFVTPDPEETAQFVSRLEKGGLNVTLRMHRGKSISGACGQLGKTNPYA
```
Based on the provided protein sequence, the enzyme appears to facilitate the chemical reaction: adenosine(37) in tRNA + 2 reduced [2Fe-2S]-[ferredoxin] + 2 S- adenosyl-L-methionine = 2-methyladenosine(37) in tRNA + 5'- deoxyadenosine + L-methionine + 2 oxidized [2Fe-2S]-[ferredoxin] + S- adenosyl-L-homocysteine.
```
</details>
<details>
<summary><b>Protein function prediction</b></summary>
- *Analyze the following amino acid sequence, and determine the function of the resulting protein, its subcellular localization, and any biological processes it may be part of:*
MNGTVNASAPSKMSEVAVERLSNDKALKVIFVLGGPGSGKGTQCAKIAKHFGFTHLSVGDLLRAEINSGSKNGTMIESMINEGKIVRSEVTIKLLQRAMHESGNDKFLIDGFPRNEENRAAFENLEKIEPEFVLFFDCPMEEMERRILNRNQGRDDDKMETIRKRFKVFIESTLPVIEFYNLKGKLYKIDACKPADEVFEDVKAIFSRFRAKEDSSQQTNICTAKRFELVMCLIKRLFREIKRMWSSFFCKAL
```
The protein characterized by the amino acid sequence demonstrates ATP binding, cytidylate kinase activity, uridylate kinase activity and is implicated in the 'de novo' pyrimidine nucleobase biosynthetic process, phosphorylation, pyrimidine nucleotide biosynthetic process. Its subcellular localization is primarily within the cytoplasm, nucleus.
```
</details>
<details>
<summary><b>Functional description generation</b></summary>
- *Examine the given protein sequence and share a brief overview of its attributes:*
MKIVLASNNQGKLAELKAMLAPLGVQLLRQAELGIPEAAEPFRTFVENALAKARHASALSGLPALADDAGLCVEAFGGLPGVDTAFYAVQFGYAKGDANNVKALLAQMAGITDRRAALVSTLVAVRSAEDPEPLIACGRVAGEVALEPMGSNGFGFDPVMFIPEFGQTFAQLPVEVKNANSHRGKATRQMMALMRERWIT
```
A concise description of the protein with the specified amino acid sequence includes: Pyrophosphatase that catalyzes the hydrolysis of nucleoside triphosphates to their monophosphate derivatives, with a high preference for the non-canonical purine nucleotides XTP (xanthosine triphosphate), dITP (deoxyinosine triphosphate) and ITP. Seems to function as a house-cleaning enzyme that removes non-canonical purine nucleotides from the nucleotide pool, thus preventing their incorporation into DNA/RNA and avoiding chromosomal lesions.
```
</details>
<details>
<summary><b>Domain/Motif prediction</b></summary>
- *Given this protein sequence, can you identify any common protein motifs or domains that it may contain?*
MANTKYIFITGGVVSSLGKGIAAASIGALLESRGLSVSLIKVDPYINVDPGTMSPFQHGEVFVTEDGTETDLDLGHYERFVRFKASKKNNFTAGKVYETVIRNERKGNYLGGTVQVIPHITNEIKKRIKKGGQNKDIAIVEVGGTVGDIESQPFVEALRQMALELPNSSWAFVHLTLVPFINASGELKTKPTQHSVKELRSLGISPDVLVCRSEQELPKDEKNKIALFCSVPAKSVISMHDVDTVYSIPILLNKQKVDDTILKKLNLKIKKPNLNDWKRVVKAKLLPEKEVNVSFVGKYTELKDSYKSINEALEHAGIQNKAKVNINFVEAEQITSQNVRKVLKKSDAILVPGGFGERGIEGMILACKYARENNVPYLGICLGMQIAIIEYARNVLKLKSANSTEFDSSTKFPVIGLITEWSDISGKKEKRTKNSDLGGTMRLGGQVCKLKKKSNSYKMYKKSEIIERHRHRYEVNPNYKDKMIEQGLDVVGTSIDGKLVEMIELPSHKWFLACQFHPEFTSNPRDGHPIFNSYIKSTITK
```
Our predictive analysis of the given protein sequence reveals possible domains or motifs. These include: Glutamine amidotransferase, CTP synthase N-terminal domains.
```
</details>
<h3> 📝 Demo</h3>
As illustrated in [our repository](https://github.com/zjunlp/Mol-Instructions/tree/main/demo), we provide an example to perform generation.
For the model fine-tuned on **protein-oriented** instructions, you can conveniently recover the model weights we trained through the following command.
Please download [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main) to obtain the pre-trained weights of LLaMA-7B, and set `--base_model` to point to the location where the model weights are saved.
Then replace `$DIFF_WEIGHT_PATH` with the path of our provided [diff weights](https://huggingface.co/zjunlp/llama-molinst-protein-7b), and replace `$RECOVER_WEIGHT_PATH` with the desired path to save the recovered weights. If the directory of recovered weights lacks required files (e.g., tokenizer configuration files), you can copy from `$DIFF_WEIGHT_PATH`.
```shell
python weight_diff.py recover \
--path_raw $BASE_MODEL_PATH \
--path_diff $DIFF_WEIGHT_PATH \
--path_tuned $RECOVER_WEIGHT_PATH
```
After that, you can execute the following command to generate outputs with the fine-tuned LLaMA model.
```shell
>> python generate.py \
--CLI True \
--protein True \
--base_model $RECOVER_WEIGHT_PATH \
```
<h3> 🚨 Limitations</h3>
The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited.
<h3> 📚 References</h3>
If you use our repository, please cite the following related paper:
```
@article{molinst,
title={Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models},
author={Fang, Yin and Liang, Xiaozhuan and Zhang, Ningyu and Liu, Kangwei and Huang, Rui and Chen, Zhuo and Fan, Xiaohui and Chen, Huajun},
journal={arXiv preprint arXiv:2306.08018},
year={2023}
}
```
<h3> 🫱🏻🫲 Acknowledgements</h3>
We appreciate [LLaMA](https://github.com/facebookresearch/llama), [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama), [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Alpaca-LoRA](https://github.com/tloen/alpaca-lora), [Chatbot Service](https://github.com/deep-diver/LLM-As-Chatbot) and many other related works for their open-source contributions. | 7,564 | [
[
-0.017578125,
-0.04754638671875,
0.037353515625,
-0.0001811981201171875,
-0.0242767333984375,
0.00298309326171875,
0.00901031494140625,
-0.0179595947265625,
0.0428466796875,
0.033905029296875,
-0.057281494140625,
-0.0439453125,
-0.050048828125,
0.03424072265... |
peartail/ppo-lunarlander | 2023-07-27T05:59:07.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | peartail | null | null | peartail/ppo-lunarlander | 0 | 2 | stable-baselines3 | 2023-07-27T05:58:50 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.87 +/- 25.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
UncleYang/ppo-LunarLander-v2-test | 2023-07-27T06:30:52.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | UncleYang | null | null | UncleYang/ppo-LunarLander-v2-test | 0 | 2 | stable-baselines3 | 2023-07-27T06:30:35 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.11 +/- 10.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
Naruke/a2c-AntBulletEnv-v0 | 2023-07-27T07:52:36.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Naruke | null | null | Naruke/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-27T07:51:30 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1388.27 +/- 220.89
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 791 | [
[
-0.0267791748046875,
-0.044403076171875,
0.01068878173828125,
0.0208892822265625,
-0.003513336181640625,
0.0017957687377929688,
0.0187530517578125,
-0.017608642578125,
0.0193939208984375,
0.0265655517578125,
-0.052581787109375,
-0.037506103515625,
-0.04428100585... |
lorenpe2/distiluse-base-multilingual-cased-v2 | 2023-07-27T09:13:27.000Z | [
"sentence-transformers",
"onnx",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | lorenpe2 | null | null | lorenpe2/distiluse-base-multilingual-cased-v2 | 0 | 2 | sentence-transformers | 2023-07-27T08:56:23 | ---
pipeline_tag: sentence-similarity
language: multilingual
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ONNX convert of distiluse-base-multilingual-cased-v2
## Conversion of [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2)
This is a [sentence-transformers](https://www.SBERT.net) ONNX model: it maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search. This custom model outputs `last_hidden_state`, similarly to the original sentence-transformers implementation.
## Usage (HuggingFace Optimum)
Using this model becomes easy when you have [optimum](https://github.com/huggingface/optimum) installed:
```
python -m pip install optimum
```
You may also need the following:
```
python -m pip install onnxruntime
python -m pip install onnx
```
Then you can use the model like this:
```python
from optimum.onnxruntime.modeling_ort import ORTModelForCustomTasks
from transformers import AutoTokenizer
model = ORTModelForCustomTasks.from_pretrained("lorenpe2/distiluse-base-multilingual-cased-v2")
tokenizer = AutoTokenizer.from_pretrained("lorenpe2/distiluse-base-multilingual-cased-v2")
inputs = tokenizer("I love burritos!", return_tensors="pt")
pred = model(**inputs)
```
You will also be able to leverage the pipeline API in transformers:
```python
from transformers import pipeline
onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
text = "I love burritos!"
pred = onnx_extractor(text)
```
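The raw `last_hidden_state` output gives one vector per token; to reach a fixed-size sentence embedding, the pipeline above mean-pools the token vectors with the attention mask before the dense projection (see the `Pooling` module in the architecture below). As a minimal sketch of that pooling step — pure NumPy, with toy tensors standing in for real model output, and the helper name `mean_pool` being ours rather than part of any library:

```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask[..., None].astype(last_hidden_state.dtype)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                    # avoid division by zero
    return summed / counts

# Toy example: batch of 1, seq len 3 (last position is padding), dim 2
hidden = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(hidden, mask))  # [[2. 3.]]
```

In the real model the pooled 768-dimensional vector is then passed through the dense layer to produce the final 512-dimensional embedding.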
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 3,049 | [
[
-0.01453399658203125,
-0.05206298828125,
0.0347900390625,
0.033477783203125,
-0.0164031982421875,
-0.019195556640625,
-0.021026611328125,
0.00524139404296875,
0.0218963623046875,
0.0251007080078125,
-0.04571533203125,
-0.0361328125,
-0.05072021484375,
0.0155... |
sephinroth/marian-finetuned-kftt-ja-to-en-jesc | 2023-07-27T09:17:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kftt",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | sephinroth | null | null | sephinroth/marian-finetuned-kftt-ja-to-en-jesc | 0 | 2 | transformers | 2023-07-27T09:03:32 | ---
license: apache-2.0
base_model: sephinroth/marian-finetuned-kftt-ja-to-en
tags:
- translation
- generated_from_trainer
datasets:
- kftt
metrics:
- bleu
model-index:
- name: marian-finetuned-kftt-ja-to-en-jesc
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kftt
type: kftt
config: en-ja
split: test
args: en-ja
metrics:
- name: Bleu
type: bleu
value: 10.183271801052467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kftt-ja-to-en-jesc
This model is a fine-tuned version of [sephinroth/marian-finetuned-kftt-ja-to-en](https://huggingface.co/sephinroth/marian-finetuned-kftt-ja-to-en) on the kftt dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5527
- Bleu: 10.1833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| 1,575 | [
[
-0.03668212890625,
-0.055816650390625,
0.01983642578125,
0.0120086669921875,
-0.049957275390625,
-0.016510009765625,
-0.02178955078125,
-0.0272369384765625,
0.019500732421875,
0.03155517578125,
-0.05352783203125,
-0.02850341796875,
-0.05499267578125,
0.00123... |
aronmal/a2c-AntBulletEnv-v0 | 2023-07-27T13:03:53.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | aronmal | null | null | aronmal/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-27T13:02:47 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1527.35 +/- 59.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
Khushnur/t5-base-end2end-questions-generation_squad_all_pcmq | 2023-07-27T18:11:03.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Khushnur | null | null | Khushnur/t5-base-end2end-questions-generation_squad_all_pcmq | 0 | 2 | transformers | 2023-07-27T15:33:55 | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-base-end2end-questions-generation_squad_all_pcmq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-end2end-questions-generation_squad_all_pcmq
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
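Note how the batch settings above relate: a per-device batch of 4 accumulated over 32 steps gives the effective batch of 4 × 32 = 128. A hedged NumPy sketch (toy linear model and random data, illustrative only — not the actual training code) of why averaging micro-batch gradients matches one large-batch gradient step:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 5))   # one "effective batch" of 128 examples
y = rng.normal(size=128)
w = np.zeros(5)

def grad(w, Xb, yb):
    """Gradient of mean squared error for a linear model on one (micro-)batch."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Gradient on the full batch of 128
g_full = grad(w, X, y)

# Same gradient via 32 accumulated micro-batches of 4 (average of micro-batch grads)
g_accum = np.mean([grad(w, X[i:i + 4], y[i:i + 4]) for i in range(0, 128, 4)], axis=0)

print(np.allclose(g_full, g_accum))  # True
```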
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8599 | 0.67 | 100 | 1.6726 |
| 1.8315 | 1.35 | 200 | 1.6141 |
| 1.7564 | 2.02 | 300 | 1.5942 |
| 1.7153 | 2.69 | 400 | 1.5861 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| 1,538 | [
[
-0.0295257568359375,
-0.0277252197265625,
0.011077880859375,
0.01531219482421875,
-0.0286712646484375,
-0.00965118408203125,
-0.004199981689453125,
-0.004451751708984375,
-0.00756072998046875,
0.0262603759765625,
-0.06976318359375,
-0.057830810546875,
-0.0406188... |
bk6000/a2c-AntBulletEnv-v0 | 2023-07-27T15:38:45.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | bk6000 | null | null | bk6000/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-27T15:37:38 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1166.13 +/- 150.73
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 791 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
digitaljungle/ppo-LunarLander-v2 | 2023-07-27T16:25:57.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | digitaljungle | null | null | digitaljungle/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-27T16:25:36 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.52 +/- 19.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
Corbanp/PPO-LunarLander-v2 | 2023-07-27T17:04:56.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Corbanp | null | null | Corbanp/PPO-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-27T17:04:38 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.90 +/- 16.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
bochen0909/PyramidsRND | 2023-07-27T17:06:23.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | bochen0909 | null | null | bochen0909/PyramidsRND | 0 | 2 | ml-agents | 2023-07-27T17:06:20 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: bochen0909/PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,334 | [
[
-0.04180908203125,
-0.03558349609375,
0.0014791488647460938,
0.01351165771484375,
-0.0095367431640625,
0.0124053955078125,
0.016387939453125,
-0.0157012939453125,
0.032470703125,
0.0301361083984375,
-0.041961669921875,
-0.05010986328125,
-0.029296875,
-0.015... |
kusknish/ppo-LunarLander-v2 | 2023-07-27T17:59:22.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kusknish | null | null | kusknish/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-27T17:59:05 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -202.29 +/- 149.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 786 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
Corbanp/PPO-LunarLander-v3 | 2023-07-27T18:17:35.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Corbanp | null | null | Corbanp/PPO-LunarLander-v3 | 0 | 2 | stable-baselines3 | 2023-07-27T18:17:15 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 230.12 +/- 101.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 785 | [
[
-0.00020766258239746094,
-0.0271453857421875,
0.0170745849609375,
0.023345947265625,
-0.006072998046875,
0.0027637481689453125,
0.034423828125,
-0.01212310791015625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
dariowsz/ppo-Pyramids | 2023-07-27T18:29:47.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | dariowsz | null | null | dariowsz/ppo-Pyramids | 0 | 2 | ml-agents | 2023-07-27T18:28:33 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dariowsz/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| 1,333 | [
[
-0.0404052734375,
-0.033660888671875,
0.002269744873046875,
0.013824462890625,
-0.01114654541015625,
0.012451171875,
0.0169525146484375,
-0.01453399658203125,
0.033660888671875,
0.03033447265625,
-0.040283203125,
-0.050750732421875,
-0.029876708984375,
-0.01... |
Kertn/ppo-LunarLander-v2 | 2023-07-27T19:15:21.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Kertn | null | null | Kertn/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-07-27T19:15:02 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.15 +/- 70.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00021135807037353516,
-0.0271148681640625,
0.017059326171875,
0.0233612060546875,
-0.00604248046875,
0.002758026123046875,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043182373046875,
-0.035247802734375,
-0.0343017578125,
... |
chh6/ppo-Pyramids | 2023-07-27T20:22:39.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | chh6 | null | null | chh6/ppo-Pyramids | 0 | 2 | ml-agents | 2023-07-27T20:21:43 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chh6/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| 1,329 | [
[
-0.040496826171875,
-0.034088134765625,
0.0022411346435546875,
0.013519287109375,
-0.0107269287109375,
0.0126953125,
0.01702880859375,
-0.0146331787109375,
0.033050537109375,
0.0305328369140625,
-0.040679931640625,
-0.050201416015625,
-0.0295257568359375,
-0... |
EulerianKnight/LunarLander-v2-unit1 | 2023-07-27T21:00:43.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | EulerianKnight | null | null | EulerianKnight/LunarLander-v2-unit1 | 0 | 2 | stable-baselines3 | 2023-07-27T21:00:27 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.62 +/- 48.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
VinEuro/LunarLanderv2 | 2023-07-27T21:48:06.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | VinEuro | null | null | VinEuro/LunarLanderv2 | 0 | 2 | stable-baselines3 | 2023-07-27T21:47:47 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.95 +/- 23.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
ma3q1h/wav2vec2phone-large-xlsr-jp-jdrt5N-demo | 2023-07-28T01:58:48.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | ma3q1h | null | null | ma3q1h/wav2vec2phone-large-xlsr-jp-jdrt5N-demo | 0 | 2 | transformers | 2023-07-27T22:24:17 | ---
license: apache-2.0
base_model: jonatasgrosman/wav2vec2-large-xlsr-53-japanese
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2phone-large-xlsr-jp-jdrt5N-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2phone-large-xlsr-jp-jdrt5N-demo
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-japanese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3714
- Wer: 0.4730
- Cer: 0.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.5238 | 1.0 | 567 | 1.3532 | 0.8709 | 0.6208 |
| 1.2812 | 2.0 | 1134 | 0.8674 | 0.6835 | 0.5633 |
| 1.1329 | 3.0 | 1701 | 0.7105 | 0.6164 | 0.5564 |
| 1.0267 | 4.0 | 2268 | 0.6111 | 0.5775 | 0.5401 |
| 1.0415 | 5.0 | 2835 | 0.5505 | 0.5499 | 0.5482 |
| 0.9767 | 6.0 | 3402 | 0.4986 | 0.5210 | 0.5204 |
| 1.0392 | 7.0 | 3969 | 0.4655 | 0.5082 | 0.5194 |
| 0.9235 | 8.0 | 4536 | 0.4457 | 0.4989 | 0.5136 |
| 0.9511 | 9.0 | 5103 | 0.4201 | 0.4917 | 0.5106 |
| 0.8998 | 10.0 | 5670 | 0.4031 | 0.4869 | 0.5081 |
| 0.8883 | 11.0 | 6237 | 0.3920 | 0.4814 | 0.5107 |
| 0.856 | 12.0 | 6804 | 0.3834 | 0.4790 | 0.5094 |
| 0.8814 | 13.0 | 7371 | 0.3772 | 0.4761 | 0.5081 |
| 0.8352 | 14.0 | 7938 | 0.3737 | 0.4735 | 0.5052 |
| 0.9001 | 15.0 | 8505 | 0.3714 | 0.4730 | 0.5054 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 2,540 | [
[
-0.044189453125,
-0.033660888671875,
0.01056671142578125,
0.004016876220703125,
-0.016571044921875,
-0.0193939208984375,
-0.01149749755859375,
-0.0187530517578125,
0.022430419921875,
0.0260772705078125,
-0.0557861328125,
-0.0479736328125,
-0.047576904296875,
... |
dergky1/a2c-AntBulletEnv-v0 | 2023-07-27T23:08:32.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | dergky1 | null | null | dergky1/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-27T23:07:38 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1304.79 +/- 16.74
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
TheBloke/llama-2-70b-Guanaco-QLoRA-GGML | 2023-09-27T13:00:33.000Z | [
"transformers",
"llama",
"llama-2",
"text-classification",
"en",
"license:llama2",
"has_space",
"text-generation-inference",
"region:us"
] | text-classification | TheBloke | null | null | TheBloke/llama-2-70b-Guanaco-QLoRA-GGML | 20 | 2 | transformers | 2023-07-27T23:39:04 | ---
language:
- en
license: llama2
tags:
- llama-2
model_name: Llama2 70B Guanaco QLoRA
inference: false
model_creator: Mikael110
model_link: https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora
model_type: llama
pipeline_tag: text-classification
quantized_by: TheBloke
base_model: Mikael110/llama-2-70b-guanaco-qlora
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 70B Guanaco QLoRA - GGML
- Model creator: [Mikael110](https://huggingface.co/Mikael110)
- Original model: [Llama2 70B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora)
## Description
This repo contains GGML format model files for [Mikael110's Llama2 70b Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
* [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), version 0.1.77 and later. A Python library with LangChain support, and OpenAI-compatible API server.
* [ctransformers](https://github.com/marella/ctransformers), version 0.2.15 and later. A Python library with LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-fp16)
* [Mikael110's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora)
## Prompt template: Guanaco
```
### Human: {prompt}
### Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023
Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).
For compatibility with latest llama.cpp, please use GGUF files instead.
Or one of the other tools and libraries listed above.
To use in llama.cpp, you must add the `-gqa 8` argument.
For other UIs and libraries, please check the docs.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
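The quoted bpw figures can be sanity-checked with simple accounting. The sketch below is illustrative only: it assumes a 16-block super-block of 16 weights each plus one 16-bit super-block scale on top of the per-block scales, which is a reading of the descriptions above rather than the authoritative format spec.

```python
# Back-of-the-envelope check of the k-quant bits-per-weight figures.
# Assumed layout (not the authoritative spec): 16 blocks x 16 weights
# per super-block, one extra 16-bit scale per super-block.
SUPERBLOCK_WEIGHTS = 16 * 16  # 256 weights per super-block

def bpw(weight_bits, scale_bits_per_block, blocks=16, superblock_overhead_bits=16):
    """Effective bits per weight for one super-block."""
    total_bits = (SUPERBLOCK_WEIGHTS * weight_bits
                  + blocks * scale_bits_per_block
                  + superblock_overhead_bits)
    return total_bits / SUPERBLOCK_WEIGHTS

print(bpw(3, 6))  # GGML_TYPE_Q3_K -> 3.4375
print(bpw(6, 8))  # GGML_TYPE_Q6_K -> 6.5625
```

Both results match the figures stated in the list above.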
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-70b-guanaco-qlora.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB| 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB| 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB| 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB| 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama-2-70b-guanaco-qlora.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB| 41.37 GB | Original quant method, 4-bit. |
| [llama-2-70b-guanaco-qlora.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB| 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB| 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama-2-70b-guanaco-qlora.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB| 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama-2-70b-guanaco-qlora.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB| 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama-2-70b-guanaco-qlora.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB| 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama-2-70b-guanaco-qlora.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 40 -gqa 8 -m llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to the GPU, use `-t 1`.
Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM - if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
Remember the `-gqa 8` argument, required for Llama 70B models.
Change `-c 4096` to the desired sequence length for this model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
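The relationship in the examples above amounts to scaled context = trained context / rope-freq-scale; a tiny hypothetical helper (the function name is mine, not part of llama.cpp) makes the arithmetic explicit:

```python
# Illustrative helper: a rope-freq-scale of s stretches the trained
# context window by a factor of 1/s.
def scaled_context(base_context, rope_freq_scale):
    """Context length implied by a given RoPE frequency scale."""
    return int(base_context / rope_freq_scale)

print(scaled_context(4096, 0.5))   # doubled context -> 8192
print(scaled_context(4096, 0.25))  # 4x context -> 16384
```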
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donors!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Mikael110's Llama2 70b Guanaco QLoRA
This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-65b). It was finetuned from the base [Llama-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.
This repo contains the QLoRA adapter.
A 7b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-7b-guanaco-qlora).
A 13b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora).
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.**
| 14,937 | [
[
-0.036468505859375,
-0.05902099609375,
0.0277252197265625,
0.0229644775390625,
-0.037200927734375,
-0.0016756057739257812,
-0.0060882568359375,
-0.04443359375,
0.0301666259765625,
0.00351715087890625,
-0.0465087890625,
-0.046295166015625,
-0.039459228515625,
... |
cgr28/a2c-AntBulletEnv-v0 | 2023-07-28T01:09:17.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | cgr28 | null | null | cgr28/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-28T01:08:05 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1364.78 +/- 62.29
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
cgr28/a2c-PandaReachDense-v2 | 2023-07-28T02:06:34.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | cgr28 | null | null | cgr28/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-28T02:03:40 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.76 +/- 0.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004802703857421875,
0.047027587890625,
-0.0001653432846069336,
-0.006023406982421875,
0.033203125,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.0625,
-0.0289764404296875,
-0.03277587890625,
-0.006637... |
ma3q1h/wav2vec2phone-large-xlsr-jp-jdrt5N-demo3 | 2023-07-28T05:26:20.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | ma3q1h | null | null | ma3q1h/wav2vec2phone-large-xlsr-jp-jdrt5N-demo3 | 0 | 2 | transformers | 2023-07-28T04:04:01 | ---
license: apache-2.0
base_model: jonatasgrosman/wav2vec2-large-xlsr-53-japanese
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2phone-large-xlsr-jp-jdrt5N-demo3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2phone-large-xlsr-jp-jdrt5N-demo3
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-japanese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7083
- Wer: 0.6203
- Cer: 0.5618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.658 | 1.0 | 567 | 1.5635 | 0.8506 | 0.6068 |
| 1.5568 | 2.0 | 1134 | 1.3338 | 0.8214 | 0.5796 |
| 1.4061 | 3.0 | 1701 | 1.1097 | 0.7830 | 0.5731 |
| 1.2577 | 4.0 | 2268 | 0.9200 | 0.7091 | 0.5671 |
| 1.2021 | 5.0 | 2835 | 0.8400 | 0.6761 | 0.5700 |
| 1.1708 | 6.0 | 3402 | 0.7893 | 0.6536 | 0.5711 |
| 1.2189 | 7.0 | 3969 | 0.7496 | 0.6395 | 0.5640 |
| 1.1133 | 8.0 | 4536 | 0.7273 | 0.6287 | 0.5674 |
| 1.1771 | 9.0 | 5103 | 0.7126 | 0.6213 | 0.5612 |
| 1.1452 | 10.0 | 5670 | 0.7083 | 0.6203 | 0.5618 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| 2,197 | [
[
-0.0408935546875,
-0.0318603515625,
0.0089874267578125,
0.006458282470703125,
-0.02252197265625,
-0.02667236328125,
-0.01352691650390625,
-0.0211944580078125,
0.01273345947265625,
0.025726318359375,
-0.05621337890625,
-0.044708251953125,
-0.048095703125,
-0.... |
xianbin/a2c-AntBulletEnv-v0 | 2023-07-28T04:30:29.000Z | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | xianbin | null | null | xianbin/a2c-AntBulletEnv-v0 | 0 | 2 | stable-baselines3 | 2023-07-28T04:28:24 | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1346.83 +/- 53.98
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 790 | [
[
-0.02679443359375,
-0.04443359375,
0.0106964111328125,
0.0208892822265625,
-0.0034961700439453125,
0.0018033981323242188,
0.0187530517578125,
-0.0176544189453125,
0.0193939208984375,
0.0265655517578125,
-0.052642822265625,
-0.037506103515625,
-0.04425048828125,
... |
joydragon/ppo-Pyramids | 2023-07-28T05:44:28.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | joydragon | null | null | joydragon/ppo-Pyramids | 0 | 2 | ml-agents | 2023-07-28T05:43:05 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: joydragon/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,334 | [
[
-0.0400390625,
-0.032470703125,
0.001556396484375,
0.0146331787109375,
-0.01148223876953125,
0.012237548828125,
0.0169677734375,
-0.01441192626953125,
0.0343017578125,
0.0296783447265625,
-0.040069580078125,
-0.049774169921875,
-0.0294952392578125,
-0.014259... |
Imxxn/RLCourseU5-SnowballTarget | 2023-07-28T07:49:37.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | Imxxn | null | null | Imxxn/RLCourseU5-SnowballTarget | 0 | 2 | ml-agents | 2023-07-28T07:49:33 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Imxxn/RLCourseU5-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,367 | [
[
-0.032470703125,
-0.0419921875,
0.0080718994140625,
0.005931854248046875,
-0.0208740234375,
0.0235748291015625,
0.01273345947265625,
-0.0172882080078125,
0.0266876220703125,
0.0330810546875,
-0.057037353515625,
-0.05377197265625,
-0.037384033203125,
-0.02090... |
tommilyjones/resnet-50-finetuned-masked-hateful-meme-restructured | 2023-07-28T10:18:32.000Z | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | tommilyjones | null | null | tommilyjones/resnet-50-finetuned-masked-hateful-meme-restructured | 0 | 2 | transformers | 2023-07-28T09:58:10 | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-masked-hateful-meme-restructured
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-masked-hateful-meme-restructured
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7093
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
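With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps up over the first 10% of optimizer steps and then decays linearly to zero. A minimal sketch of that multiplier (the real implementation is `transformers`' `get_linear_schedule_with_warmup`; the ratio-to-steps conversion here is an assumption):

```python
def lr_multiplier(step, total_steps, warmup_ratio=0.1):
    """Linear warmup to 1.0, then linear decay to 0.0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)  # ramp 0 -> 1
    # decay 1 -> 0 over the remaining steps
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With the 660 training steps shown below, warmup would cover roughly the first 66 steps.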
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6639 | 0.99 | 66 | 0.7093 | 0.5 |
| 0.6569 | 2.0 | 133 | 0.7295 | 0.5 |
| 0.6489 | 2.99 | 199 | 0.7257 | 0.5 |
| 0.6553 | 4.0 | 266 | 0.7274 | 0.5 |
| 0.6334 | 4.99 | 332 | 0.7311 | 0.5 |
| 0.627 | 6.0 | 399 | 0.7371 | 0.5 |
| 0.6561 | 6.99 | 465 | 0.7386 | 0.5 |
| 0.6552 | 8.0 | 532 | 0.7354 | 0.5 |
| 0.6427 | 8.99 | 598 | 0.7346 | 0.5 |
| 0.6451 | 9.92 | 660 | 0.7377 | 0.498 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
| 2,387 | [
[
-0.036773681640625,
-0.02520751953125,
-0.0051422119140625,
0.0013017654418945312,
-0.014068603515625,
-0.02178955078125,
0.001972198486328125,
-0.0149383544921875,
0.0159912109375,
0.0179595947265625,
-0.058441162109375,
-0.0477294921875,
-0.046051025390625,
... |
iulusoy/en_test_pipeline | 2023-07-28T12:13:37.000Z | [
"spacy",
"en",
"license:mit",
"region:us"
] | null | iulusoy | null | null | iulusoy/en_test_pipeline | 0 | 2 | spacy | 2023-07-28T10:06:44 | ---
tags:
- spacy
language:
- en
license: mit
model-index:
- name: en_test_pipeline
results: []
---
A test pipeline for ModelManager testing purposes
| Feature | Description |
| --- | --- |
| **Name** | `en_test_pipeline` |
| **Version** | `0.1.0` |
| **spaCy** | `>=3.5.4,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `spancat` |
| **Components** | `tok2vec`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Liam Keegan](https://github.com/ssciwr/moralization) |
### Label Scheme
<details>
<summary>View label scheme (6 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `Care`, `Fairness`, `Cheating`, `Oppression`, `Liberty`, `Harm` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 0.00 |
| `SPANS_SC_P` | 0.00 |
| `SPANS_SC_R` | 0.00 |
| `TOK2VEC_LOSS` | 12290.20 |
| `SPANCAT_LOSS` | 273866.65 | | 958 | [
[
-0.040985107421875,
-0.034759521484375,
0.02337646484375,
0.020782470703125,
-0.027099609375,
0.0024127960205078125,
0.03350830078125,
-0.0004725456237792969,
0.0091094970703125,
0.041015625,
-0.07025146484375,
-0.03643798828125,
-0.04730224609375,
-0.010375... |
tommilyjones/vit-base-patch16-224-finetuned-masked-hateful-meme-restructured | 2023-07-28T10:49:43.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | tommilyjones | null | null | tommilyjones/vit-base-patch16-224-finetuned-masked-hateful-meme-restructured | 0 | 2 | transformers | 2023-07-28T10:22:15 | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-masked-hateful-meme-restructured
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-masked-hateful-meme-restructured
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7518
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6625 | 0.99 | 66 | 0.7385 | 0.518 |
| 0.6413 | 2.0 | 133 | 0.6980 | 0.538 |
| 0.6063 | 2.99 | 199 | 0.7422 | 0.53 |
| 0.5813 | 4.0 | 266 | 0.7794 | 0.52 |
| 0.5551 | 4.99 | 332 | 0.7975 | 0.52 |
| 0.5249 | 6.0 | 399 | 0.7518 | 0.54 |
| 0.5254 | 6.99 | 465 | 0.8074 | 0.53 |
| 0.5335 | 8.0 | 532 | 0.7907 | 0.52 |
| 0.4867 | 8.99 | 598 | 0.8286 | 0.524 |
| 0.4746 | 9.92 | 660 | 0.8262 | 0.522 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
| 2,435 | [
[
-0.033447265625,
-0.044677734375,
0.0010242462158203125,
0.00617218017578125,
-0.0234832763671875,
-0.024627685546875,
-0.00513458251953125,
-0.01009368896484375,
0.01543426513671875,
0.02313232421875,
-0.053619384765625,
-0.05108642578125,
-0.050384521484375,
... |
BA-Project-SA-CRM/SA_Checkpoints | 2023-08-02T10:57:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | BA-Project-SA-CRM | null | null | BA-Project-SA-CRM/SA_Checkpoints | 0 | 2 | transformers | 2023-07-28T10:32:15 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA_Checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA_Checkpoints
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1118
- Accuracy: 0.9583
- F1: 0.9583
- Precision: 0.9583
- Recall: 0.9583
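For reference, the four metrics above relate as follows for a binary confusion matrix (an illustrative pure-Python sketch; the reported values come from the Trainer's own metric functions, which may use a different averaging scheme):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```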
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,266 | [
[
-0.031982421875,
-0.0296173095703125,
0.023406982421875,
-0.0034046173095703125,
-0.030303955078125,
-0.02764892578125,
-0.004245758056640625,
-0.0185394287109375,
0.00814056396484375,
0.023956298828125,
-0.064453125,
-0.04779052734375,
-0.046356201171875,
-... |
s3nh/StableBeluga-7B-GPTQ | 2023-07-29T18:20:59.000Z | [
"transformers",
"llama",
"text-generation",
"en",
"arxiv:2307.09288",
"arxiv:2306.02707",
"license:openrail",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | s3nh | null | null | s3nh/StableBeluga-7B-GPTQ | 0 | 2 | transformers | 2023-07-28T10:47:58 | ---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GPTQ-format model files for [this project](https://huggingface.co/stabilityai/StableBeluga-7B/).
## Model Description
`Stable Beluga 7B` is a Llama2 7B model fine-tuned on an Orca-style dataset
## Usage
Start chatting with `Stable Beluga 7B` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-7B", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are StableBeluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Stable Beluga 7B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant:
The output of Stable Beluga 7B
```
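A small helper that assembles this template programmatically (a convenience sketch; the function name and default system prompt are illustrative, while the layout follows the template above):

```python
def build_prompt(user_message,
                 system_prompt="You are StableBeluga, a helpful AI assistant."):
    """Build a Stable Beluga prompt following the template above."""
    return (f"### System:\n{system_prompt}\n\n"
            f"### User:\n{user_message}\n\n"
            f"### Assistant:\n")
```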
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 7B is an auto-regressive language model fine-tuned on Llama2 7B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`Stable Beluga 7B`) are licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga-7B/blob/main/LICENSE.txt)
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Training Dataset
`Stable Beluga 7B` is trained on our internal Orca-style dataset.
### Training Procedure
Models are trained via supervised fine-tuning on the aforementioned datasets, in mixed precision (BF16), and optimized with AdamW. We outline the following hyperparameters:
| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
## Ethical Considerations and Limitations
Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Citations
```bibtext
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtext
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707}, | 5,321 | [
[
-0.032440185546875,
-0.065185546875,
0.00986480712890625,
0.0240631103515625,
-0.02423095703125,
-0.002353668212890625,
-0.0040435791015625,
-0.038360595703125,
0.00457763671875,
0.025970458984375,
-0.04449462890625,
-0.043701171875,
-0.050048828125,
-0.0040... |
quesmed/tone | 2023-07-29T23:16:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | quesmed | null | null | quesmed/tone | 0 | 2 | transformers | 2023-07-28T13:31:53 | ---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tone
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6174
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.426351850922471e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 31
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9335 | 1.0 | 8 | 0.7771 | 0.5333 |
| 0.5801 | 2.0 | 16 | 0.6174 | 0.6667 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.1
- Tokenizers 0.13.3
| 1,476 | [
[
-0.0291290283203125,
-0.0419921875,
0.01067352294921875,
0.022186279296875,
-0.0299835205078125,
-0.02716064453125,
-0.0226593017578125,
-0.0164337158203125,
0.0087432861328125,
0.0245819091796875,
-0.062744140625,
-0.057708740234375,
-0.04962158203125,
-0.0... |
TFLai/bloom-560m-4bit-alpaca | 2023-07-28T19:08:40.000Z | [
"peft",
"region:us"
] | null | TFLai | null | null | TFLai/bloom-560m-4bit-alpaca | 1 | 2 | peft | 2023-07-28T19:08:00 | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
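The same settings can be reconstructed in code with `transformers`' `BitsAndBytesConfig` (a sketch; it assumes `transformers`, `torch`, and `bitsandbytes` are installed, and the base-model name is not stated in this card):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit stays False
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Hypothetical base model -- this card does not name it:
# model = AutoModelForCausalLM.from_pretrained(
#     "bigscience/bloom-560m", quantization_config=bnb_config)
```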
| 440 | [
[
-0.04901123046875,
-0.0516357421875,
0.031585693359375,
0.035186767578125,
-0.03802490234375,
0.00797271728515625,
0.01316070556640625,
-0.0147552490234375,
-0.013275146484375,
0.032379150390625,
-0.04205322265625,
-0.00855255126953125,
-0.034027099609375,
0... |
Sheerapi/thesequel-model | 2023-07-29T19:17:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Sheerapi | null | null | Sheerapi/thesequel-model | 0 | 2 | transformers | 2023-07-29T01:27:30 | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: thesequel-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesequel-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4557 | 1.0 | 1886 | 4.2097 |
| 4.2617 | 2.0 | 3772 | 4.1093 |
| 4.177 | 3.0 | 5658 | 4.0810 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
| 1,361 | [
[
-0.028411865234375,
-0.043487548828125,
0.02081298828125,
0.01305389404296875,
-0.02496337890625,
-0.03131103515625,
-0.000934600830078125,
-0.00972747802734375,
-0.004947662353515625,
0.0224609375,
-0.056671142578125,
-0.040924072265625,
-0.05194091796875,
... |
2Nisavi2/Cifrar_10_Deployment | 2023-07-29T16:31:15.000Z | [
"keras",
"region:us"
] | null | 2Nisavi2 | null | null | 2Nisavi2/Cifrar_10_Deployment | 0 | 2 | keras | 2023-07-29T05:10:05 | <div align="center">
<img src="https://radcolombia.org/web/sites/default/files/archivos/instituciones/universidad-bosque/logo-ub.png" width="500"/>
Faculty of Sciences
Master's in Applied Statistics and Data Science
***"Deployment of a VGG16 Model for Cifrar10 (CIFAR-10) from Hugging Face"***
*Final GCP Project*
</div>
This repository contains a VGG16 model trained on the Cifrar10 (CIFAR-10) dataset. To test the model's effectiveness, visit this [link](https://3174c43e92e93f5122.gradio.live) | 499 | [
[
-0.059356689453125,
-0.05059814453125,
0.0185394287109375,
0.016998291015625,
-0.041748046875,
0.005153656005859375,
0.0015583038330078125,
-0.04986572265625,
0.047271728515625,
-0.006885528564453125,
-0.047637939453125,
-0.04010009765625,
-0.033447265625,
0... |
TFLai/bloomz-1b7-4bit-alpaca | 2023-07-29T09:04:11.000Z | [
"peft",
"region:us"
] | null | TFLai | null | null | TFLai/bloomz-1b7-4bit-alpaca | 1 | 2 | peft | 2023-07-29T09:01:32 | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
| 440 | [
[
-0.04901123046875,
-0.051605224609375,
0.031585693359375,
0.035186767578125,
-0.03802490234375,
0.0079498291015625,
0.013153076171875,
-0.0147247314453125,
-0.01328277587890625,
0.032379150390625,
-0.042083740234375,
-0.0085906982421875,
-0.034027099609375,
... |
amazingvince/llama-2-16k-booksum | 2023-07-29T14:25:02.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | amazingvince | null | null | amazingvince/llama-2-16k-booksum | 9 | 2 | transformers | 2023-07-29T13:55:25 | ## Function for the prompt template
```python
def format_instruction(sample):
return f"""### Instruction:
Write a summary of the following text.
### Input:
{sample['chapter']}
### Response:
{sample['summary_text']}
"""
``` | 226 | [
[
-0.005222320556640625,
-0.04083251953125,
0.016326904296875,
0.024932861328125,
-0.015533447265625,
-0.0333251953125,
-0.0036029815673828125,
0.04754638671875,
0.02734375,
0.0197906494140625,
-0.059112548828125,
-0.018157958984375,
-0.0190887451171875,
0.035... |
kaiyuy/leandojo-lean4-sst-byt5-small-updated | 2023-07-29T21:07:24.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | kaiyuy | null | null | kaiyuy/leandojo-lean4-sst-byt5-small-updated | 0 | 2 | transformers | 2023-07-29T20:21:50 | ---
license: mit
inference:
parameters:
max_length: 1024
widget:
- text: "before\nα : Type u\nβ : Type u\na : Cardinal\nb : Cardinal\nc : Cardinal\n⊢ a ^< b ≤ c ↔ ∀ (x : Cardinal), x < b → a ^ x ≤ c\n\nafter\n...\n⊢ (∀ (i : ↑(Iio b)), a ^ ↑i ≤ c) ↔ ∀ (x : Cardinal), x < b → a ^ x ≤ c"
example_title: Example
---
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("kaiyuy/leandojo-lean4-sst-byt5-small-updated")
model = AutoModelForSeq2SeqLM.from_pretrained("kaiyuy/leandojo-lean4-sst-byt5-small-updated")
state_pair = """before
α : Type u
β : Type u
a : Cardinal
b : Cardinal
c : Cardinal
⊢ a ^< b ≤ c ↔ ∀ (x : Cardinal), x < b → a ^ x ≤ c
after
...
⊢ (∀ (i : ↑(Iio b)), a ^ ↑i ≤ c) ↔ ∀ (x : Cardinal), x < b → a ^ x ≤ c"""
tokenized_state_pair = tokenizer(state_pair, return_tensors="pt")
# Generate a single tactic.
tactic_ids = model.generate(tokenized_state_pair.input_ids, max_length=1024)
tactic = tokenizer.decode(tactic_ids[0], skip_special_tokens=True)
print(tactic, end="\n\n")
# Generate multiple tactics via beam search.
tactic_candidates_ids = model.generate(
tokenized_state_pair.input_ids,
max_length=1024,
num_beams=4,
length_penalty=0.0,
do_sample=False,
num_return_sequences=4,
early_stopping=False,
)
tactic_candidates = tokenizer.batch_decode(
tactic_candidates_ids, skip_special_tokens=True
)
for tac in tactic_candidates:
print(tac)
``` | 1,476 | [
[
-0.01495361328125,
-0.0306396484375,
0.00872802734375,
0.030120849609375,
-0.021331787109375,
0.0104217529296875,
0.00022113323211669922,
0.0099639892578125,
0.00921630859375,
0.00909423828125,
-0.049041748046875,
-0.023345947265625,
-0.057159423828125,
0.00... |
language-and-voice-lab/wav2vec2-large-xlsr-53-icelandic-ep30-967h | 2023-10-23T21:10:17.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"icelandic",
"xlrs-53-icelandic",
"iceland",
"reykjavik",
"samromur",
"is",
"dataset:language-and-voice-lab/samromur_milljon",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | language-and-voice-lab | null | null | language-and-voice-lab/wav2vec2-large-xlsr-53-icelandic-ep30-967h | 1 | 2 | transformers | 2023-07-30T05:22:27 | ---
language: is
datasets:
- language-and-voice-lab/samromur_milljon
tags:
- audio
- automatic-speech-recognition
- icelandic
- xlrs-53-icelandic
- iceland
- reykjavik
- samromur
license: cc-by-4.0
model-index:
- name: wav2vec2-large-xlsr-53-icelandic-ep30-967h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur (Test)
type: language-and-voice-lab/samromur_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 7.698
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur (Dev)
type: language-and-voice-lab/samromur_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 6.786
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur Children (Test)
type: language-and-voice-lab/samromur_children
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 6.467
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur Children (Dev)
type: language-and-voice-lab/samromur_children
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 4.234
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malrómur (Test)
type: language-and-voice-lab/malromur_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 6.631
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malrómur (Dev)
type: language-and-voice-lab/malromur_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 5.836
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Althingi (Test)
type: language-and-voice-lab/althingi_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 17.904
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Althingi (Dev)
type: language-and-voice-lab/althingi_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 17.931
---
# wav2vec2-large-xlsr-53-icelandic-ep30-967h
The "wav2vec2-large-xlsr-53-icelandic-ep30-967h" is an acoustic model suitable for Automatic Speech Recognition in Icelandic. It is the result of fine-tuning the model [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for 30 epochs with 967 hours of Icelandic data collected by the [Language and Voice Laboratory](https://huggingface.co/language-and-voice-lab) through the platform [Samrómur](https://samromur.is/).
The specific data that was used to fine-tune the model is the corpus [Samrómur Milljón](https://huggingface.co/datasets/language-and-voice-lab/samromur_milljon), which is the result of the automatic verification of 1 million recordings coming from the corpus ["Samromur Unverified 22.07"](http://hdl.handle.net/20.500.12537/265). It has to be pointed out that this model was trained on different data than our previous model [wav2vec2-large-xlsr-53-icelandic-ep10-1000h](https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h).
The fine-tuning process was performed in July 2023 on the servers of the [Language and Voice Laboratory](https://lvl.ru.is/) at Reykjavík University (Iceland) by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena).
# Evaluation
```python
import torch
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2ForCTC
#Load the processor and model.
MODEL_NAME="language-and-voice-lab/wav2vec2-large-xlsr-53-icelandic-ep30-967h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("language-and-voice-lab/samromur_children", split="test")
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def prepare_dataset(batch):
audio = batch["audio"]
#Batched output is "un-batched" to ensure mapping is correct
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
with processor.as_target_processor():
batch["labels"] = processor(batch["normalized_text"]).input_ids
return batch
ds = ds.map(prepare_dataset, remove_columns=ds.column_names,num_proc=1)
#Define the evaluation metric
import numpy as np
wer_metric = load_metric("wer")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
#We do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
#Do the evaluation (with batch_size=1)
model = model.to(torch.device("cuda"))
def map_to_result(batch):
with torch.no_grad():
input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_str"] = processor.batch_decode(pred_ids)[0]
batch["sentence"] = processor.decode(batch["labels"], group_tokens=False)
return batch
results = ds.map(map_to_result,remove_columns=ds.column_names)
#Compute the overall WER now.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"])))
```
**Test Result**: 0.076
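The test result above is a word error rate (WER): the word-level edit distance between hypothesis and reference transcriptions, normalized by the number of reference words. A minimal, dependency-free sketch of the metric (the evaluation above uses the equivalent `wer` metric from `datasets`; the example strings here are illustrative):

```python
# Word error rate = (substitutions + deletions + insertions) / reference words,
# computed with a standard Levenshtein dynamic program over words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("hún er að lesa bók", "hún er lesa bók"))  # 1 deletion / 5 words = 0.2
```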
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2023xlrs53icelandic30ep967h,
title={Acoustic Model in Icelandic: wav2vec2-large-xlsr-53-icelandic-ep30-967h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/language-and-voice-lab/wav2vec2-large-xlsr-53-icelandic-ep30-967h},
year={2023}
}
```
# Acknowledgements
Thanks to Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible.
We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture. This model is an unexpected result of all the resources gathered by the Programme.
Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.
| 7,271 | [
[
-0.031982421875,
-0.0516357421875,
0.01348114013671875,
0.0114288330078125,
-0.0125885009765625,
-0.013092041015625,
-0.037017822265625,
-0.04449462890625,
-0.0034332275390625,
0.018524169921875,
-0.036895751953125,
-0.032379150390625,
-0.048583984375,
-0.00... |
pratsy/a2c-PandaReachDense-v2 | 2023-07-30T15:47:53.000Z | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | pratsy | null | null | pratsy/a2c-PandaReachDense-v2 | 0 | 2 | stable-baselines3 | 2023-07-30T09:26:23 | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.25 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| 802 | [
[
-0.019744873046875,
-0.04742431640625,
-0.004787445068359375,
0.0469970703125,
-0.00018846988677978516,
-0.006023406982421875,
0.033172607421875,
-0.0249481201171875,
0.028045654296875,
0.042694091796875,
-0.06256103515625,
-0.0289764404296875,
-0.03277587890625... |
Climate-TwitterBERT/Climate-TwitterBERT-step1 | 2023-08-13T14:51:32.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"Twitter",
"Climate Change",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Climate-TwitterBERT | null | null | Climate-TwitterBERT/Climate-TwitterBERT-step1 | 0 | 2 | transformers | 2023-07-30T09:48:41 | ---
language:
- en
tags:
- Twitter
- Climate Change
license: mit
---
# Model Card Climate-TwitterBERT-step-1
## Overview:
Using Covid-Twitter-BERT-v2 (https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) as the starting model, we continued domain-adaptive pre-training on a corpus of firm tweets between 2007 and 2020. The model was then fine-tuned on the downstream task to classify whether a given tweet is related to climate change topics.
The model provides a label and probability score, indicating whether a given tweet is related to climate change topics (label = 1) or not (label = 0).
## Performance metrics:
Based on the test set, the model achieves the following results:
• Loss: 0.0632
• F1-weighted: 0.9778
• F1: 0.9148
• Accuracy: 0.9775
• Precision: 0.8841
• Recall: 0.9477
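As a quick consistency check (our addition, not from the original card), the reported F1 is the harmonic mean of the precision and recall above:

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 0.8841, 0.9477
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9148, matching the reported F1
```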
## Example usage:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
task_name = 'text-classification'
model_name = 'Climate-TwitterBERT/Climate-TwitterBERT-step1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
pipe = pipeline(task=task_name, model=model, tokenizer=tokenizer)
tweet = "We are committed to significantly cutting our carbon emissions by 30% before 2030."
result = pipe(tweet)
# The 'result' variable will contain the classification output: 0 = non-climate tweet, 1= climate tweet
```
## Citation:
```bibtex
@article{fzz2023climatetwitter,
title={Responding to Climate Change crisis - firms' tradeoffs},
author={Fritsch, Felix and Zhang, Qi and Zheng, Xiang},
journal={Working paper},
year={2023},
institution={University of Mannheim, the Chinese University of Hong Kong, and NHH Norwegian School of Economics},
url={https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4527255}
}
```
Fritsch, F., Zhang, Q., & Zheng, X. (2023). Responding to Climate Change crisis - firms' tradeoffs [Working paper]. University of Mannheim, the Chinese University of Hong Kong, and NHH Norwegian School of Economics.
## Framework versions
• Transformers 4.28.1
• Pytorch 2.0.1+cu118
• Datasets 2.14.1
• Tokenizers 0.13.3
| 2,183 | [
[
-0.01398468017578125,
-0.0335693359375,
0.0037593841552734375,
0.017913818359375,
-0.02691650390625,
0.006954193115234375,
-0.01297760009765625,
-0.027313232421875,
0.01023101806640625,
0.01190185546875,
-0.053741455078125,
-0.03900146484375,
-0.05224609375,
... |
AbstractQbit/electra_large_imdb_htsplice | 2023-07-30T10:32:13.000Z | [
"transformers",
"pytorch",
"electra",
"text-classification",
"arxiv:1905.05583",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | AbstractQbit | null | null | AbstractQbit/electra_large_imdb_htsplice | 0 | 2 | transformers | 2023-07-30T10:03:48 | `google/electra-large-discriminator` finetuned on the IMDB dataset for 2 epochs.
Long examples are tokenized with the head and tail parts of a review, as described in [How to Fine-Tune BERT for Text Classification?](https://arxiv.org/abs/1905.05583)
```python
def preprocess_function(example):
tokens = tokenizer(example["text"], truncation=False)
    if len(tokens['input_ids']) > 512:
        # Head + tail splice: first 129 tokens, a [SEP] (id 102),
        # then the last 382 tokens; 129 + 1 + 382 = 512.
        tokens['input_ids'] = tokens['input_ids'][:129] + \
            [102] + tokens['input_ids'][-382:]
tokens['token_type_ids'] = [0]*512
tokens['attention_mask'] = [1]*512
return tokens
``` | 597 | [
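The splice arithmetic (129 head tokens + one separator + 382 tail tokens = 512) can be checked with a self-contained sketch; `head_tail_splice` and the dummy token ids below are illustrative stand-ins, with 102 as the BERT-family `[SEP]` id used in the snippet above:

```python
# Illustrative re-implementation of the head+tail splice on plain lists.
def head_tail_splice(input_ids, max_len=512, head=129, tail=382, sep_id=102):
    """Keep the first `head` and last `tail` token ids, joined by `sep_id`."""
    if len(input_ids) <= max_len:
        return input_ids
    return input_ids[:head] + [sep_id] + input_ids[-tail:]

long_review = list(range(1000))   # stand-in for an over-length tokenized review
spliced = head_tail_splice(long_review)
print(len(spliced))               # 512
print(spliced[129])               # 102, the separator between head and tail
```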
[
-0.047882080078125,
-0.054718017578125,
0.017791748046875,
0.0003390312194824219,
-0.03271484375,
-0.01123809814453125,
-0.023345947265625,
-0.01222991943359375,
0.027679443359375,
0.0258636474609375,
-0.0361328125,
-0.0312347412109375,
-0.036834716796875,
0... |