| modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lberglund/sweep_quick_0_20231012102921 | 2023-10-12T10:35:23.000Z | ["diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "license:openrail++", "has_space", "region:us"] | text-to-image | lberglund | null | null | lberglund/sweep_quick_0_20231012102921 | 1 | 530 | diffusers | 2023-10-12T10:29:25 |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: "a photo of a person showing <thumbs_up> thumbs up"
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - lberglund/sweep_quick_0_20231012102921
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of a person showing <thumbs_up> thumbs up" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
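As a minimal usage sketch (not part of the original card; it assumes a CUDA GPU and a recent `diffusers` release), the adapter can be loaded on top of the base model like this:

```python
# Hedged sketch: load the SDXL base model and apply these LoRA weights.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lberglund/sweep_quick_0_20231012102921")

# The instance prompt the adapter was trained on:
image = pipe("a photo of a person showing <thumbs_up> thumbs up").images[0]
image.save("thumbs_up.png")
```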
| 700 | (embedding vector omitted) |
TheBloke/tora-code-13B-v1.0-GGUF | 2023-10-14T23:56:34.000Z | ["transformers", "llama", "code", "math", "text-generation", "en", "dataset:gsm8k", "dataset:competition_math", "arxiv:2309.17452", "license:llama2", "text-generation-inference", "region:us"] | text-generation | TheBloke | null | null | TheBloke/tora-code-13B-v1.0-GGUF | 1 | 530 | transformers | 2023-10-14T23:50:19 |
---
base_model: llm-agents/tora-code-13b-v1.0
datasets:
- gsm8k
- competition_math
inference: false
language:
- en
library_name: transformers
license: llama2
metrics:
- exact_match
model_creator: LLM-Agents
model_name: ToRA Code 13B v1.0
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
tags:
- code
- math
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ToRA Code 13B v1.0 - GGUF
- Model creator: [LLM-Agents](https://huggingface.co/llm-agents)
- Original model: [ToRA Code 13B v1.0](https://huggingface.co/llm-agents/tora-code-13b-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LLM-Agents's ToRA Code 13B v1.0](https://huggingface.co/llm-agents/tora-code-13b-v1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-code-13B-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF)
* [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-code-13b-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
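For illustration (not part of the original card), filling the template in Python is plain string formatting; the question shown is an arbitrary example:

```python
# Hypothetical helper: wrap a user message in the ToRA prompt template.
def build_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>"

print(build_prompt("Solve 3x + 5 = 20 for x."))
```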
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tora-code-13b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [tora-code-13b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [tora-code-13b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [tora-code-13b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [tora-code-13b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tora-code-13b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [tora-code-13b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [tora-code-13b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tora-code-13b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [tora-code-13b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [tora-code-13b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [tora-code-13b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/tora-code-13B-v1.0-GGUF and below it, a specific filename to download, such as: tora-code-13b-v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/tora-code-13B-v1.0-GGUF tora-code-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
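Equivalently, here is a minimal Python sketch using the `huggingface_hub` API directly (same repo and filename as the CLI command above; `local_dir` behaviour may differ across library versions):

```python
# Download a single GGUF file via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/tora-code-13B-v1.0-GGUF",
    filename="tora-code-13b-v1.0.Q4_K_M.gguf",
    local_dir=".",
)
print(f"Downloaded to {path}")
```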
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/tora-code-13B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-code-13B-v1.0-GGUF tora-code-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tora-code-13b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tora-code-13B-v1.0-GGUF", model_file="tora-code-13b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
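#### Simple llama-cpp-python example code

A comparable sketch with `llama-cpp-python` (not from the original card; adjust `n_gpu_layers` to your hardware and build the prompt in the ToRA format shown earlier):

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = Llama(
    model_path="./tora-code-13b-v1.0.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)
output = llm("<|user|>\nSolve 3x + 5 = 20 for x.\n<|assistant|>", max_tokens=256)
print(output["choices"][0]["text"])
```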
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain, with a brief sketch after the links:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
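A rough sketch of the llama-cpp-python route (hedged: it assumes the 2023-era `langchain` import path; see the guides above for current usage):

```python
from langchain.llms import LlamaCpp

# Point LangChain's LlamaCpp wrapper at the downloaded GGUF file.
llm = LlamaCpp(model_path="./tora-code-13b-v1.0.Q4_K_M.gguf", n_ctx=4096)
print(llm("<|user|>\nWhat is 12 * 7?\n<|assistant|>"))
```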
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donators!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LLM-Agents's ToRA Code 13B v1.0
<h1 align="center">
ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving
</h1>
<p align="center">
<a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> •
<a href="https://arxiv.org/pdf/2309.17452.pdf"><b>[📜 Paper]</b></a> •
<a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> •
<a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a>
<br>
<a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> •
<a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> •
<a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a>
<!-- <a href="#-quick-start">Quick Start</a> • -->
<!-- <a href="#%EF%B8%8F-citation">Citation</a> -->
</p>
<p align="center">
Repo for "<a href="https://arxiv.org/pdf/2309.17452.pdf" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>"
</p>
## 🔥 News
- [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!!
- [2023/09/29] ToRA paper, repo, and website released.
## 💡 Introduction
ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. The ToRA series seamlessly integrates natural language reasoning with the use of external tools, combining the analytical prowess of language with the computational efficiency of tools.
| Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>†</sup> |
|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 78.3 |
| GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 |
| [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4|
| [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5|
| [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9|
| [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 |
| [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 |
| [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** |
- <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset. It significantly outperforms GPT-4's CoT result (51.0 vs. 42.5) and is competitive with GPT-4 when solving problems with programs. By open-sourcing our code and models, we hope more breakthroughs will come!
- <sup>†</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith.
## ⚡️ Training
The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4.
We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details.
## 🪁 Inference & Evaluation
Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{gou2023tora,
title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving},
author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen},
year={2023},
eprint={2309.17452},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
| 20,519 | (embedding vector omitted) |
casehold/legalbert | 2021-07-02T05:55:35.000Z | ["transformers", "pytorch", "tf", "jax", "bert", "legal", "fill-mask", "en", "arxiv:2104.08671", "endpoints_compatible", "region:us"] | fill-mask | casehold | null | null | casehold/legalbert | 14 | 529 | transformers | 2022-03-02T23:29:05 |
---
language: en
pipeline_tag: fill-mask
tags:
- legal
---
### Legal-BERT
Model and tokenizer files for Legal-BERT model from [When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset of 53,000+ Legal Holdings](https://arxiv.org/abs/2104.08671).
### Training Data
The pretraining corpus was constructed by ingesting the entire Harvard Law case corpus from 1965 to the present (https://case.law/). The size of this corpus (37GB) is substantial, representing 3,446,187 legal decisions across all federal and state courts, and is larger than the size of the BookCorpus/Wikipedia corpus originally used to train BERT (15GB).
### Training Objective
This model is initialized with the base BERT model (uncased, 110M parameters), [bert-base-uncased](https://huggingface.co/bert-base-uncased), and trained for an additional 1M steps on the MLM and NSP objective, with tokenization and sentence segmentation adapted for legal text (cf. the paper).
### Usage
Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning on Legal-BERT for classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
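As a quick sanity check (a sketch, not from the original repo; the sentence is an illustrative example), the model works with the standard `transformers` fill-mask pipeline:

```python
# Fill a masked token with Legal-BERT and print the top predictions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="casehold/legalbert")
for pred in fill_mask("The court granted the motion for summary [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```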
### Citation
    @inproceedings{zhengguha2021,
      title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
      author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
      year={2021},
      eprint={2104.08671},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
      publisher={Association for Computing Machinery}
    }
Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset. In *Proceedings of the 18th International Conference on Artificial Intelligence and Law (ICAIL '21)*, June 21-25, 2021, São Paulo, Brazil. ACM Inc., New York, NY, (in press). arXiv: [2104.08671 \[cs.CL\]](https://arxiv.org/abs/2104.08671).
| 2,178 | (embedding vector omitted) |
huspacy/hu_core_news_trf | 2023-10-26T09:41:14.000Z | ["spacy", "token-classification", "hu", "license:cc-by-sa-4.0", "model-index", "region:us"] | token-classification | huspacy | null | null | huspacy/hu_core_news_trf | 1 | 529 | spacy | 2022-04-01T12:20:59 |
---
tags:
- spacy
- token-classification
language:
- hu
license: cc-by-sa-4.0
model-index:
- name: hu_core_news_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9096153846
- name: NER Recall
type: recall
value: 0.9147327707
- name: NER F Score
type: f_score
value: 0.9121669004
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9840662233
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.983204938
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9666953775
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9864127835
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9010957462
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.8617769485
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9966555184
---
Hungarian transformer pipeline (huBERT) for HuSpaCy. Components: transformer, senter, tagger, morphologizer, lemmatizer, parser, ner
| Feature | Description |
| --- | --- |
| **Name** | `hu_core_news_trf` |
| **Version** | `3.5.4` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `transformer`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `experimental_arc_predicter`, `experimental_arc_labeler`, `ner` |
| **Components** | `transformer`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `experimental_arc_predicter`, `experimental_arc_labeler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Hungarian Szeged](https://universaldependencies.org/treebanks/hu_szeged/index.html) (Richárd Farkas, Katalin Simkó, Zsolt Szántó, Viktor Varga, Veronika Vincze (MTA-SZTE Research Group on Artificial Intelligence))<br />[NYTK-NerKor Corpus](https://github.com/nytud/NYTK-NerKor) (Eszter Simon, Noémi Vadász (Department of Language Technology and Applied Linguistics))<br />[Szeged NER Corpus](https://rgai.inf.u-szeged.hu/node/130) (György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik (MTA-SZTE Research Group on Artificial Intelligence))<br />[huBERT base model (cased)](https://huggingface.co/SZTAKI-HLT/hubert-base-cc) (Dávid Márk Nemeskey (SZTAKI-HLT)) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [SzegedAI, MILAB](https://github.com/huspacy/huspacy) |
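A minimal usage sketch (not part of the original card; it assumes the `hu_core_news_trf` package is already installed per HuSpaCy's install instructions, and the sentence is an illustrative example):

```python
# Load the pipeline and inspect tokens and named entities.
import spacy

nlp = spacy.load("hu_core_news_trf")
doc = nlp("Kovács János tegnap Budapesten járt.")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_)
for ent in doc.ents:
    print(ent.text, ent.label_)
```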
### Label Scheme
<details>
<summary>View label scheme (1217 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `Definite=Def\|POS=DET\|PronType=Art`, `Case=Ine\|Number=Sing\|POS=NOUN`, `POS=ADV`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Definite=Ind\|POS=DET\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `Case=Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADP`, `POS=CCONJ`, `Case=Del\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PROPN`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Sup\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Neg`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=SCONJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Int`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Sbl\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `Case=Sup\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `POS=ADV\|PronType=Tot`, `Case=Ill\|Definite=Ind\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Ine\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Del\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Sbl\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Definite=Def\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|VerbForm=Conv`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dis\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Abs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Del\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Loc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ter\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `POS=X`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Tra\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflex=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Sbl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3\|VerbForm=PartPast`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|POS=PROPN`, `Case=Abs\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ter\|Number=Plur\|POS=NOUN`, `Case=Tem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Case=Ine\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PROPN`, `Case=Ter\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sbl\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Definite=Def\|POS=DET\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Imp,Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Aspect=Iter\|Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Tem\|Number=Sing\|POS=NOUN`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|POS=DET\|PronType=Int`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abs\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Abl\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Imp,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|POS=DET\|PronType=Tot`, `Definite=Def\|POS=DET\|PronType=Neg`, `Case=Ins\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, 
`Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|POS=ADV`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Degree=Sup\|POS=ADV`, `Case=Acc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Acc\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Cau\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psed]=Sing\|POS=ADJ`, `Case=Nom\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Mood=Pot\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Sbl\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Plur\|POS=ADV\|Person=1\|PronType=PrsPron`, `POS=ADV\|PronType=v`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, 
`Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=1\|PronType=PrsPron`, `Case=Ter\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sbl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Del\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Sbl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=All\|Number=Plur\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ade\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PROPN`, `Case=Nom\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sup\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, 
`Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Sbl\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Del\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, 
`Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Com\|Number=Sing\|POS=NOUN`, `Case=Tra\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abs\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Cas=1\|Number=Sing\|POS=PROPN`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dis\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abs\|Number=Plur\|POS=NOUN`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Ins\|Degree=Cmp\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=All\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=2\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Number=Sing\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, 
`Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Plur\|POS=NOUN`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Cas=6\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Int`, 
`Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Sbl\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Tem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Cau\|Number=Plur\|POS=PROPN`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ter\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Sup\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=w`, `Case=Gen\|Number=Sing\|POS=SYM\|Type=w`, `Case=Abl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=All\|Number=Sing\|POS=SYM\|Type=w`, `Case=Tra\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Sup\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ine\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=o`, `Case=Gen\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Signed\|Number=Sing\|POS=NUM`, `Case=Com\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Gen\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Rel`, `Case=Ine\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=o`, `Case=Dat\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ill\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ins\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Measure\|Number=Sing\|POS=NUM`, `Case=Abs\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=m`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=m`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=w`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Sup\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abs\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ter\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Gen\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ine\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=All\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, 
`Case=All\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abl\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Del\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ins\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Sup\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Tem\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=All\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Sbl\|Number=Plur\|POS=PROPN`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ill\|Number=Plur\|POS=PROPN`, `Case=Loc\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Abl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ade\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=SYM\|Type=w`, `Case=Cau\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|POS=PROPN`, `Case=Del\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Degree=Sup\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, 
`Case=Acc\|Number=Plur\|Number[psed]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Plur\|POS=PROPN`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ill\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=2`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ine\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=2\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ela\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, 
`Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot` |
| **`experimental_arc_labeler`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:locy`, `advmod:mode`, `advmod:que`, `advmod:tfrom`, `advmod:tlocy`, `advmod:to`, `advmod:tto`, `amod:att`, `amod:attlvc`, `appos`, `aux`, `case`, `cc`, `ccomp`, `ccomp:obj`, `ccomp:obl`, `ccomp:pred`, `compound`, `compound:preverb`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `dislocated`, `flat:name`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nmod:att`, `nmod:attlvc`, `nmod:obl`, `nsubj`, `nsubj:lvc`, `nummod`, `obj`, `obj:lvc`, `obl`, `obl:lvc`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
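The label schemes above can be listed programmatically once the pipeline package is installed. A minimal sketch; `hu_pipeline` is a placeholder, since this excerpt does not show the actual package name:

```python
import spacy

# "hu_pipeline" is a placeholder -- substitute the installed package name
# of the pipeline this card describes.
nlp = spacy.load("hu_pipeline")

# Print the label scheme of components that expose one.
for name in ("morphologizer", "ner"):
    if name in nlp.pipe_names:
        print(name, nlp.get_pipe(name).labels)
```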
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.86 |
| `TOKEN_R` | 99.93 |
| `TOKEN_F` | 99.89 |
| `SENTS_P` | 99.78 |
| `SENTS_R` | 99.55 |
| `SENTS_F` | 99.67 |
| `TAG_ACC` | 98.41 |
| `POS_ACC` | 98.32 |
| `MORPH_ACC` | 96.67 |
| `MORPH_MICRO_P` | 98.82 |
| `MORPH_MICRO_R` | 98.53 |
| `MORPH_MICRO_F` | 98.67 |
| `LEMMA_ACC` | 98.64 |
| `BOUND_DEP_LAS` | 86.17 |
| `BOUND_DEP_UAS` | 90.11 |
| `DEP_UAS` | 90.11 |
| `DEP_LAS` | 86.18 |
| `ENTS_P` | 90.96 |
| `ENTS_R` | 91.47 |
| `ENTS_F` | 91.22 | | 78,549 | [
[
-0.032745361328125,
-0.02716064453125,
0.0291900634765625,
0.0394287109375,
-0.0180206298828125,
-0.009796142578125,
-0.0203857421875,
0.0009760856628417969,
0.05023193359375,
0.032501220703125,
-0.04400634765625,
-0.0653076171875,
-0.0289764404296875,
0.035... |
Jihuai/bert-ancient-chinese | 2023-02-20T02:37:01.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | Jihuai | null | null | Jihuai/bert-ancient-chinese | 12 | 529 | transformers | 2022-06-09T06:18:59 | ---
language:
- "zh"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
inference: false
license: "apache-2.0"
---
# **bert-ancient-chinese**
## **Introduction**
With the current wave of Artificial Intelligence and Digital Humanities sweeping the world, the automatic analysis of modern Chinese has achieved great results. However, the automatic analysis of ancient Chinese remains relatively underdeveloped and struggles to meet the practical needs of Sinology, history, philology, Chinese history, and the teaching of Sinology and traditional culture. Ancient Chinese raises many open questions about characters, words, and parts of speech, and resource construction faces many difficulties. Digital Humanities research requires large-scale corpora and high-performance natural language processing tools for ancient Chinese. Given that pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts, pre-trained models for the automatic processing of ancient texts are urgently needed.
In 2022, we took part in **[EvaHan 2022](https://circse.github.io/LT4HALA/2022/EvaHan)**, the first NLP tool evaluation competition in the field of ancient Chinese. **`bert-ancient-chinese`** was trained to further improve model performance in the open setting.
**If you want to cite our work, please refer to this [paper](https://aclanthology.org/2022.lt4hala-1.25/):**
```
@inproceedings{wang2022uncertainty,
title={The Uncertainty-based Retrieval Framework for Ancient Chinese CWS and POS},
author={Wang, Pengyu and Ren, Zhichen},
booktitle={Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages},
pages={164--168},
year={2022}
}
```
A **Chinese version** of this introduction is available at [this link](https://github.com/Jihuai-wpy/bert-ancient-chinese).
## **Further Pre-training**
**Compared with the previous pre-trained models, `bert-ancient-chinese` mainly has the following characteristics:**
- Ancient Chinese texts are mostly written in traditional characters and contain a large number of rare characters, so the `vocab table` (vocabulary) of existing pre-trained models lacks many of them. `bert-ancient-chinese` further expands the vocabulary by learning from a large-scale corpus, reaching a final vocabulary size of **38208**. Compared with the **21128**-token vocabulary of `bert-base-chinese` and the **29791**-token vocabulary of `siku-bert`, `bert-ancient-chinese` has a **larger vocabulary** that covers more rare characters, which helps performance on downstream tasks (see the snippet after this list). The vocabulary is stored in the `vocab.txt` file inside the pre-trained model.
- `bert-ancient-chinese` uses a larger training set. While `siku-bert` uses only the `"Siku Quanshu"` as its training data, we use a much larger dataset (about six times the size of the `"Siku Quanshu"`), covering the Cong (collectanea), Taoism, Buddhism, Confucianism, Poetry, History, Medicine, Art, Yi, and Zi divisions, which is richer in content and wider in scope than the `"Siku Quanshu"` alone.
- Following the idea of `Domain-Adaptive Pretraining`, `bert-ancient-chinese` continues training from `bert-base-chinese` on an ancient Chinese corpus, yielding a pre-trained model for the automatic processing of ancient Chinese.
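A quick way to verify the expanded vocabulary described in the first point, assuming both checkpoints can be downloaded from the Hugging Face Hub:

```python
from transformers import AutoTokenizer

tok_ancient = AutoTokenizer.from_pretrained("Jihuai/bert-ancient-chinese")
tok_base = AutoTokenizer.from_pretrained("bert-base-chinese")

print(len(tok_ancient))  # 38208 -- the expanded vocabulary
print(len(tok_base))     # 21128 -- the original bert-base-chinese vocabulary
```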
## **How to use**
### Huggingface Transformers
With [Huggingface Transformers](https://github.com/huggingface/transformers), the `from_pretrained` method can download the `bert-ancient-chinese` model directly:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Jihuai/bert-ancient-chinese")
model = AutoModel.from_pretrained("Jihuai/bert-ancient-chinese")
```
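Since this is a masked language model, it can also be used through the `fill-mask` pipeline. A minimal sketch; the sample sentence is illustrative only:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Jihuai/bert-ancient-chinese")

# Predict the masked character in a classical Chinese idiom (illustrative).
for pred in fill_mask("[MASK]出於藍,而勝於藍。"):
    print(pred["token_str"], round(pred["score"], 4))
```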
## **Download PTM**
The model we provide is the `PyTorch` version.
### From Huggingface
Download directly from the Hugging Face Hub; the model there is kept in sync with the latest version:
- **bert-ancient-chinese:[Jihuai/bert-ancient-chinese · Hugging Face](https://huggingface.co/Jihuai/bert-ancient-chinese)**
### From Cloud Disk
Download address:
| Model | Link |
| :------------------: | :----------------------------------------------------------: |
| bert-ancient-chinese | [Link](https://pan.baidu.com/s/1JC5_64gLT07wgG2hjzqxjg) Extraction code: qs7x |
## **Evaluation & Results**
We tested and compared different pre-trained models on the training and test sets provided by the [EvaHan 2022](https://circse.github.io/LT4HALA/2022/EvaHan) competition, comparing performance by fine-tuning on the downstream tasks of `Chinese Word Segmentation (CWS)` and `part-of-speech tagging (POS tagging)`.
We use `BERT+CRF` as the baseline architecture and compare `siku-bert`, `siku-roberta`, and `bert-ancient-chinese` on these downstream tasks. To make full use of the training data, we employ `K-fold cross-validation` (a sketch follows the results table) while keeping all other hyperparameters the same; the evaluation metric is the `F1` score.
<table>
<tr>
<td></td>
<td colspan="2" align="center"> <i>Zuozhuan</i> </td>
<td colspan="2" align="center"> <i>Shiji</i> </td>
</tr>
<tr>
<td></td>
<td align="center">CWS</td>
<td align="center">POS</td>
<td align="center">CWS</td>
<td align="center">POS</td>
</tr>
<tr>
<td align="center">siku-bert</td>
<td align="center">96.0670%</td>
<td align="center">92.0156%</td>
<td align="center">92.7909%</td>
<td align="center">87.1188%</td>
</tr>
<tr>
<td align="center">siku-roberta</td>
<td align="center">96.0689%</td>
<td align="center">92.0496%</td>
<td align="center">93.0183%</td>
<td align="center">87.5339%</td>
</tr>
<tr>
<td align="center">bert-ancient-chinese</td>
<td align="center"> <b>96.3273%</b> </td>
<td align="center"> <b>92.5027%</b> </td>
<td align="center"> <b>93.2917%</b> </td>
<td align="center"> <b>87.8749%</b> </td>
</tr>
</table>
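A minimal sketch of the K-fold split described above, using scikit-learn; the fold count `K=5` is an assumption (the card does not state it), and the fine-tuning step is only indicated by a comment:

```python
import numpy as np
from sklearn.model_selection import KFold

sentences = np.arange(1000)  # stand-in indices for the training sentences
kfold = KFold(n_splits=5, shuffle=True, random_state=42)  # K=5 is assumed

for fold, (train_idx, dev_idx) in enumerate(kfold.split(sentences)):
    # Fine-tune the BERT+CRF model on sentences[train_idx] and
    # measure the F1 score on sentences[dev_idx] here.
    print(f"fold {fold}: {len(train_idx)} train / {len(dev_idx)} dev")
```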
## **Citing**
If our work is helpful for your research, please cite the paper above.
## **Disclaimer**
The experimental results presented in this report only show performance under a specific dataset and hyperparameter combination and do not characterize the full capability of each model. Results may vary with random seeds and computing equipment. **Users may use the model freely within the scope of the license, but we are not responsible for direct or indirect losses caused by using the contents of this project.**
## **Acknowledgment**
`bert-ancient-chinese` continues training from [bert-base-chinese](https://huggingface.co/bert-base-chinese).
Thanks to Prof. [Xipeng Qiu](https://xpqiu.github.io/) and the [Natural Language Processing Laboratory of Fudan University](https://nlp.fudan.edu.cn/).
## **Contact us**
Pengyu Wang: wpyjihuai@gmail.com
| 7,446 | [
[
-0.040557861328125,
-0.0643310546875,
0.006687164306640625,
0.006259918212890625,
-0.03662109375,
-0.026519775390625,
-0.03875732421875,
-0.0517578125,
0.011962890625,
0.00690460205078125,
-0.028533935546875,
-0.03955078125,
-0.03759765625,
-0.00098133087158... |
Abdullah17/whisper-small-urdu | 2022-12-13T09:28:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ur",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | Abdullah17 | null | null | Abdullah17/whisper-small-urdu | 4 | 529 | transformers | 2022-11-22T15:18:06 | ---
language:
- ur
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small UR
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: ur, split: test'
metrics:
- name: Wer
type: wer
value: 41.698656429942424
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small UR - Muhammad Abdullah
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9758
- Wer: 41.6987
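A minimal inference sketch for this checkpoint (not part of the original training setup; assumes a local audio file and that `ffmpeg` is available for decoding):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Abdullah17/whisper-small-urdu")

# "sample_urdu.wav" is a placeholder path to a local recording.
print(asr("sample_urdu.wav")["text"])
```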
### Training hyperparameters
The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` reconstruction follows the list):
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3500
- mixed_precision_training: Native AMP
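These settings correspond roughly to the arguments below; this is a hedged reconstruction, not the author's exact training script:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir and any
# argument not stated in the card are assumptions.
args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-urdu",
    learning_rate=1e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3500,
    fp16=True,  # mixed precision (Native AMP)
)
```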
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0074 | 9.62 | 1000 | 0.8238 | 42.0345 |
| 0.0003 | 19.23 | 2000 | 0.9381 | 42.6583 |
| 0.0002 | 28.85 | 3000 | 0.9758 | 41.6987 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| 1,753 | [
[
-0.035675048828125,
-0.040374755859375,
0.007602691650390625,
0.0080718994140625,
-0.023345947265625,
-0.039703369140625,
-0.026519775390625,
-0.0281219482421875,
0.0212860107421875,
0.0245819091796875,
-0.05322265625,
-0.040740966796875,
-0.044158935546875,
... |
keremberke/yolov5s-construction-safety | 2022-12-30T20:48:25.000Z | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/construction-safety-object-detection",
"model-index",
"has_space",
"region:us"
] | object-detection | keremberke | null | null | keremberke/yolov5s-construction-safety | 2 | 529 | yolov5 | 2022-12-29T21:36:32 |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/construction-safety-object-detection
model-index:
- name: keremberke/yolov5s-construction-safety
results:
- task:
type: object-detection
dataset:
type: keremberke/construction-safety-object-detection
name: keremberke/construction-safety-object-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.3947243485213459 # min: 0.0 - max: 1.0
name: mAP@0.5
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-construction-safety" src="https://huggingface.co/keremberke/yolov5s-construction-safety/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-construction-safety')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-construction-safety --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** | 2,130 | [
[
-0.0458984375,
-0.04400634765625,
0.039276123046875,
-0.025238037109375,
-0.0248260498046875,
-0.0271453857421875,
0.0181884765625,
-0.0384521484375,
0.0008983612060546875,
0.0238037109375,
-0.042327880859375,
-0.06640625,
-0.043670654296875,
-0.011451721191... |
timm/eva02_large_patch14_448.mim_in22k_ft_in22k | 2023-03-31T05:45:59.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/eva02_large_patch14_448.mim_in22k_ft_in22k | 0 | 529 | timm | 2023-03-31T04:32:38 | ---
tags:
- image-classification
- timm
library_tag: timm
license: mit
datasets:
- imagenet-22k
---
# Model card for eva02_large_patch14_448.mim_in22k_ft_in22k
An EVA02 image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases; see the originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 326.4
- GMACs: 362.4
- Activations (M): 690.0
- Image size: 448 x 448
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_large_patch14_448.mim_in22k_ft_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_large_patch14_448.mim_in22k_ft_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1025, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
  title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 5,410 | [
[
-0.044677734375,
-0.02978515625,
0.01326751708984375,
0.00811004638671875,
-0.0166015625,
0.0018291473388671875,
-0.00951385498046875,
-0.033721923828125,
0.039703369140625,
0.02752685546875,
-0.0335693359375,
-0.051513671875,
-0.043182373046875,
0.006797790... |
Kha37lid/khalid | 2023-08-04T02:28:18.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Kha37lid | null | null | Kha37lid/khalid | 0 | 529 | diffusers | 2023-08-04T02:15:24 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Khalid Dreambooth model trained by Kha37lid with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
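If you prefer plain `diffusers`, here is a minimal sketch (the trigger word `khalid` is an assumption based on the concept name and may differ from the actual training prompt):
```python
# A minimal diffusers sketch; the prompt token "khalid" is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Kha37lid/khalid", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of khalid person").images[0]
image.save("khalid_sample.png")
```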
Sample pictures of this concept:
| 496 | [
[
-0.029510498046875,
-0.051422119140625,
0.043426513671875,
0.032958984375,
-0.0194091796875,
0.039031982421875,
0.0240936279296875,
-0.0169219970703125,
0.0416259765625,
0.005832672119140625,
-0.0172576904296875,
-0.0234527587890625,
-0.035614013671875,
-0.0... |
nlp-waseda/roberta-large-japanese | 2022-10-21T14:48:46.000Z | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | nlp-waseda | null | null | nlp-waseda/roberta-large-japanese | 19 | 528 | transformers | 2022-05-10T08:37:48 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
---
# nlp-waseda/roberta-large-japanese
## Model description
This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese")
sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
output = model(**encoding)
# predictions for the [MASK] position can be read from output.logits
```
You can fine-tune this model on downstream tasks.
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
`BertJapaneseTokenizer` now supports automatic `JumanppTokenizer` and `SentencepieceTokenizer`. You can use [this model](https://huggingface.co/nlp-waseda/roberta-large-japanese-with-auto-jumanpp) without any data preprocessing.
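For reference, a minimal pre-segmentation sketch using the [pyknp](https://github.com/ku-nlp/pyknp) bindings (assumes Juman++ and pyknp are installed; the sample sentence is illustrative):
```python
# A minimal sketch of segmenting raw text with Juman++ via pyknp before tokenization.
from pyknp import Juman

jumanpp = Juman()
result = jumanpp.analysis("早稲田大学で自然言語処理を研究する。")
segmented = " ".join(m.midasi for m in result.mrph_list())
print(segmented)  # e.g. "早稲田 大学 で 自然 言語 処理 を 研究 する 。"
```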
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took two weeks using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 6e-5
- per_device_train_batch_size: 103
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 5
- total_train_batch_size: 4120
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6
- lr_scheduler_type: linear
- training_steps: 670000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
## Performance on JGLUE
See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
| 2,276 | [
[
-0.034942626953125,
-0.0684814453125,
0.0185089111328125,
0.018402099609375,
-0.038360595703125,
0.0013437271118164062,
-0.039398193359375,
-0.0256195068359375,
0.035888671875,
0.045562744140625,
-0.05810546875,
-0.036468505859375,
-0.049285888671875,
0.0063... |
keremberke/yolov5s-garbage | 2023-01-05T13:01:38.000Z | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/garbage-object-detection",
"model-index",
"has_space",
"region:us"
] | object-detection | keremberke | null | null | keremberke/yolov5s-garbage | 1 | 528 | yolov5 | 2023-01-05T13:01:00 |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.7
inference: false
datasets:
- keremberke/garbage-object-detection
model-index:
- name: keremberke/yolov5s-garbage
results:
- task:
type: object-detection
dataset:
type: keremberke/garbage-object-detection
name: keremberke/garbage-object-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.38002559554830656 # min: 0.0 - max: 1.0
name: mAP@0.5
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-garbage" src="https://huggingface.co/keremberke/yolov5s-garbage/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-garbage')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-garbage --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
| 2,036 | [
[
-0.057281494140625,
-0.03399658203125,
0.041900634765625,
-0.037933349609375,
-0.0228424072265625,
-0.03094482421875,
0.00896453857421875,
-0.038299560546875,
0.003631591796875,
0.026611328125,
-0.047821044921875,
-0.063232421875,
-0.041534423828125,
-0.0098... |
zjunlp/MolGen-large | 2023-06-23T12:46:58.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"molecular language model",
"SELFIES",
"molecule generation",
"arxiv:2301.11259",
"autotrain_compatible",
"has_space",
"region:us"
] | text2text-generation | zjunlp | null | null | zjunlp/MolGen-large | 5 | 528 | transformers | 2023-01-31T12:04:56 | ---
tags:
- molecular language model
- SELFIES
- molecule generation
widget:
- text: '[C][=C][C][=C][C][=C][Ring1][=Branch1]'
inference: false
---
# MolGen-large
MolGen-large was introduced in the paper ["Domain-Agnostic Molecular Generation with Self-feedback"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen). It is a pre-trained molecular generative model built using the 100\% robust molecular language representation, SELFIES.
## Model description
MolGen-large is the first pre-trained model that only produces chemically valid molecules.
With a training corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms.
Specifically, MolGen-large employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder.
Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen-large can generate molecules with desired properties, making it a valuable tool for molecular optimization.

## Intended uses
You can use the raw model for molecule generation or fine-tune it to a downstream task. Please take note that the following examples only demonstrate the utilization of our pre-trained model for molecule generation. See the [repository](https://github.com/zjunlp/MolGen) to look for fine-tune details on a task that interests you.
### How to use
Molecule generation example:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large")
>>> sf_input = tokenizer("[C][=C][C][=C][C][=C][Ring1][=Branch1]", return_tensors="pt")
>>> # beam search
>>> molecules = model.generate(input_ids=sf_input["input_ids"],
attention_mask=sf_input["attention_mask"],
max_length=15,
min_length=5,
num_return_sequences=5,
num_beams=5)
>>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ","") for g in molecules]
['[C][=C][C][=C][C][=C][Ring1][=Branch1]',
'[C][=C][C][=C][C][=C][C][=C][Ring1][=Branch1]',
'[C][=C][C][=C][C][=C][Ring1][=Branch1][C][=C][C][=C]',
'[C][=C][C][=C][C][=C][Ring1][=Branch1][C@H1][C][=C][C]',
'[C][=C][C][=C][C][=C][Ring1][=Branch1][C@H1][=C][C][=C]']
```
### BibTeX entry and citation info
```bibtex
@article{fang2023molecular,
title={Molecular Language Model as Multi-task Generator},
author={Fang, Yin and Zhang, Ningyu and Chen, Zhuo and Fan, Xiaohui and Chen, Huajun},
journal={arXiv preprint arXiv:2301.11259},
year={2023}
}
``` | 2,888 | [
[
-0.0231781005859375,
-0.047637939453125,
0.025482177734375,
-0.0012645721435546875,
-0.0227813720703125,
-0.0002892017364501953,
-0.004322052001953125,
-0.0094757080078125,
0.0205230712890625,
0.0230560302734375,
-0.044586181640625,
-0.039306640625,
-0.040740966... |
Gregor/mblip-mt0-xl | 2023-09-21T13:58:20.000Z | [
"transformers",
"pytorch",
"blip-2",
"vision",
"image-to-text",
"image-captioning",
"visual-question-answering",
"en",
"multilingual",
"dataset:Gregor/mblip-train",
"arxiv:2307.06930",
"arxiv:2301.12597",
"license:mit",
"region:us"
] | image-to-text | Gregor | null | null | Gregor/mblip-mt0-xl | 8 | 528 | transformers | 2023-07-10T15:02:45 | ---
language:
- en
- multilingual
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
datasets:
- Gregor/mblip-train
---
# mBLIP mT0-XL
This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930).
## Model description
mBLIP is a [BLIP-2](https://arxiv.org/abs/2301.12597) model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transformer (Q-Former) and a large language model (LLM).
The Q-Former and ViT have both been initialized by an English BLIP-2 checkpoint ([blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip2-flan-t5-xl)) and then re-aligned
to the multilingual LLM ([mt0-xl](https://huggingface.co/bigscience/mt0-xl)) using a [multilingual task mixture](https://huggingface.co/datasets/Gregor/mblip-train).
<img src="https://github.com/gregor-ge/mBLIP/blob/main/architecture.png"
alt="The mBLIP architecture" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
in 96 languages.
#### Languages
mBLIP was trained on the following 96 languages:
`
af, am, ar, az, be, bg, bn, ca, ceb, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, ga, gd, gl, gu, ha, hi, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, ur, uz, vi, xh, yi, yo, zh, zu
`
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and prompt text in a zero-shot setup or
alternatively finetune it for downstream applications.
We strongly recommend LoRA applied to the LLM when finetuning and to use bf16 as data type - standard fp16 can cause NaN loss.
See [our repository](https://github.com/gregor-ge/mBLIP) for the code used to train and finetune this model.
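As a rough illustration, a LoRA setup for the LLM with [PEFT](https://github.com/huggingface/peft) could look like the sketch below; the hyperparameters and `target_modules` are assumptions, not values from the paper:
```python
# A minimal LoRA-on-the-LLM finetuning setup sketch using PEFT.
# r, lora_alpha, lora_dropout and target_modules are illustrative assumptions.
import torch
from transformers import Blip2ForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = Blip2ForConditionalGeneration.from_pretrained(
    "Gregor/mblip-mt0-xl",
    torch_dtype=torch.bfloat16,  # bf16 recommended; plain fp16 can cause NaN loss
)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections of the mT0 (T5-style) LLM
)
# wrap only the language model, leaving the ViT and Q-Former unchanged
model.language_model = get_peft_model(model.language_model, lora_config)
```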
## Bias, Risks, Limitations, and Ethical Considerations
While mBLIP can work in theory with up to 100 languages, in practice, we expect best results when prompted in high-resource languages
like English, German, Spanish, etc.
mBLIP inherits the risks, limitations, and biases of the models used to initialize it.
mBLIP has not been tested in real-world applications. It should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context it is being deployed within.
### How to use
For code examples, we refer to the BLIP-2 [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`bfloat16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", torch_dtype=torch.bfloat16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
>**Important:** Paper results only use int8 for the LLM weights while this loads all weights in int8.
> We see that this gives slightly worse results but currently int8 for some model parts is not supported by HuggingFace.
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
## Citation
If you use our model, please cite the following:
```
@article{geigle2023mblip,
author = {Gregor Geigle and
Abhay Jain and
Radu Timofte and
Goran Glava\v{s}},
title = {mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs},
journal = {arXiv},
volume = {abs/2307.06930},
year = {2023},
url = {https://arxiv.org/abs/2307.06930},
eprinttype = {arXiv},
eprint = {2307.06930},
}
``` | 6,750 | [
[
-0.0260467529296875,
-0.044921875,
0.01220703125,
0.0308837890625,
-0.0180511474609375,
-0.01018524169921875,
-0.024444580078125,
-0.05157470703125,
-0.0035228729248046875,
0.0167083740234375,
-0.031463623046875,
-0.0262908935546875,
-0.039337158203125,
0.01... |
vita-group/vicuna-7b-v1.3_gptq | 2023-09-03T14:39:46.000Z | [
"transformers",
"llama",
"text-generation",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | vita-group | null | null | vita-group/vicuna-7b-v1.3_gptq | 0 | 528 | transformers | 2023-09-02T05:19:59 | ---
license: mit
---
# Compressed LLM Model Zone
The models are prepared by [Visual Informatics Group @ University of Texas at Austin (VITA-group)](https://vita-group.github.io/). Credits to Ajay Jaiswal, Zhenyu Zhang, Zhangheng Li, Lu Yin, Shiwei Liu and Junyuan Hong.
License: [MIT License](https://opensource.org/license/mit/)
Set up the environment
```shell
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install transformers==4.31.0
pip install accelerate
pip install auto-gptq # for gptq
pip install sentencepiece
```
How to use pruned models
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model = 'llama-2-7b'
comp_method = 'magnitude_unstructured'
comp_degree = 0.2
model_path = f'vita-group/{base_model}_{comp_method}'
model = AutoModelForCausalLM.from_pretrained(
model_path,
revision=f's{comp_degree}',
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.cuda()
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
How to use wanda+gptq models
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
model_path = 'vita-group/llama-2-7b_wanda_2_4_gptq_4bit_128g'
tokenizer_path = 'meta-llama/Llama-2-7b-hf'
model = AutoGPTQForCausalLM.from_quantized(
model_path,
# inject_fused_attention=False, # or
disable_exllama=True,
device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, trust_remote_code=True)
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.to('cuda')
outputs = model.generate(input_ids=input_ids, max_length=128)
tokenizer.decode(outputs[0])
```
How to use gptq models
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
# model_path = 'vita-group/llama-2-7b_wanda_2_4_gptq_4bit_128g'
# tokenizer_path = 'meta-llama/Llama-2-7b-hf'
model_path = 'vita-group/vicuna-7b-v1.3_gptq'
tokenizer_path = 'lmsys/vicuna-7b-v1.3'
model = AutoGPTQForCausalLM.from_quantized(
model_path,
# inject_fused_attention=False, # or
disable_exllama=True,
device_map='auto',
revision='2bit_128g',
)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, trust_remote_code=True)
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.to('cuda')
outputs = model.generate(input_ids=input_ids, max_length=128)
tokenizer.decode(outputs[0])
```
| | Base Model | Model Size | Compression Method | Compression Degree |
|---:|:-------------|:-------------|:----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| 0 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.1) |
| 1 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.2) |
| 2 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.3) |
| 3 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.5) |
| 4 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.6) |
| 5 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.1) |
| 6 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.2) |
| 7 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.3) |
| 8 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.5) |
| 9 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.6) |
| 10 | Llama-2 | 7b | [wanda_gptq](https://huggingface.co/vita-group/llama-2-7b_wanda_2_4_gptq_4bit_128g) | 4bit_128g |
| 11 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.1) |
| 12 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.2) |
| 13 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.3) |
| 14 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.5) |
| 15 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.6) |
| 16 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [10bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/10bit_128g) |
| 17 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [12bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/12bit_128g) |
| 18 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [14bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/14bit_128g) |
| 19 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [2bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/2bit_128g) |
| 20 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [3bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/3bit_128g) |
| 21 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [4bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/4bit_128g) |
| 22 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [6bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/6bit_128g) |
| 23 | vicuna-v1.3 | 13b | [gptq](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq) | [8bit_128g](https://huggingface.co/vita-group/vicuna-13b-v1.3_gptq/tree/8bit_128g) |
| 24 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [10bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/10bit_128g) |
| 25 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [12bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/12bit_128g) |
| 26 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [14bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/14bit_128g) |
| 27 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [2bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/2bit_128g) |
| 28 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [3bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/3bit_128g) |
| 29 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [4bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/4bit_128g) |
| 30 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [6bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/6bit_128g) |
| 31 | vicuna-v1.3 | 7b | [gptq](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq) | [8bit_128g](https://huggingface.co/vita-group/vicuna-7b-v1.3_gptq/tree/8bit_128g) |
| 10,376 | [
[
-0.0250701904296875,
-0.035369873046875,
0.01018524169921875,
0.039093017578125,
-0.04156494140625,
0.0099639892578125,
0.0185699462890625,
-0.0264129638671875,
0.050018310546875,
0.0024318695068359375,
-0.046783447265625,
-0.042205810546875,
-0.05548095703125,
... |
unicamp-dl/translation-en-pt-t5 | 2021-10-11T03:47:21.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"en",
"pt",
"dataset:EMEA",
"dataset:ParaCrawl 99k",
"dataset:CAPES",
"dataset:Scielo",
"dataset:JRC-Acquis",
"dataset:Biomedical Domain Corpora",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"tex... | translation | unicamp-dl | null | null | unicamp-dl/translation-en-pt-t5 | 10 | 527 | transformers | 2022-03-02T23:29:05 | ---
language:
- en
- pt
datasets:
- EMEA
- ParaCrawl 99k
- CAPES
- Scielo
- JRC-Acquis
- Biomedical Domain Corpora
tags:
- translation
metrics:
- bleu
---
# Introduction
This repository brings an implementation of T5 for translation in EN-PT tasks using a modest hardware setup. We propose some changes in the tokenizer and post-processing that improve the results, and we use a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)!
# Usage
Just follow the "Use in Transformers" instructions. It is necessary to prepend a few words to the input to define the task for T5.
You can also create a pipeline for it. An example with the phrase "I like to eat rice" is:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-en-pt-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-en-pt-t5")
enpt_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer)
enpt_pipeline("translate English to Portuguese: I like to eat rice.")
```
# Citation
```bibtex
@inproceedings{lopes-etal-2020-lite,
title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation",
author = "Lopes, Alexandre and
Nogueira, Rodrigo and
Lotufo, Roberto and
Pedrini, Helio",
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.wmt-1.90",
pages = "833--840",
}
``` | 1,809 | [
[
-0.002674102783203125,
-0.02099609375,
0.0202178955078125,
0.0253143310546875,
-0.0406494140625,
-0.00641632080078125,
-0.0278167724609375,
-0.0293121337890625,
0.01459503173828125,
0.034698486328125,
-0.0239715576171875,
-0.03680419921875,
-0.0654296875,
0.... |
ncfrey/ChemGPT-4.7M | 2022-06-15T15:17:11.000Z | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"endpoints_compatible",
"region:us"
] | text-generation | ncfrey | null | null | ncfrey/ChemGPT-4.7M | 10 | 527 | transformers | 2022-05-11T19:54:55 | ---
tags:
- chemistry
---
# ChemGPT 4.7M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
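For example, a minimal generation sketch (the SELFIES prompt and sampling settings are illustrative, not from the paper):
```python
# A minimal generation sketch; assumes the checkpoint loads with the standard Auto classes.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ncfrey/ChemGPT-4.7M")
model = AutoModelForCausalLM.from_pretrained("ncfrey/ChemGPT-4.7M")

# ChemGPT operates on SELFIES tokens; seed generation with a SELFIES fragment
inputs = tokenizer("[C][=C]", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```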
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
| 1,939 | [
[
-0.0203399658203125,
-0.0168609619140625,
0.04803466796875,
-0.0027751922607421875,
0.008697509765625,
0.0006999969482421875,
-0.0099639892578125,
-0.026214599609375,
0.00769805908203125,
0.0173492431640625,
-0.05267333984375,
-0.0303955078125,
-0.02810668945312... |
facebook/unit_hifigan_HK_layer12.km2500_frame_TAT-TTS | 2022-10-19T14:27:14.000Z | [
"fairseq",
"audio",
"text-to-speech",
"hk",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | facebook | null | null | facebook/unit_hifigan_HK_layer12.km2500_frame_TAT-TTS | 1 | 527 | fairseq | 2022-10-08T01:34:38 | ---
license: cc-by-nc-4.0
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: hk
---
## unit_hifigan_HK_layer12.km2500_frame_TAT-TTS
Hokkien unit HiFiGAN based vocoder from fairseq:
- Trained with [TAT-TTS](https://sites.google.com/speech.ntut.edu.tw/fsw/home/tat-tts-corpus) data with 4 speakers in Taiwanese Hokkien accent. See [here]( https://research.facebook.com/publications/hokkien-direct-speech-to-speech-translation)
for more training details.
## Usage
```python
import json
import os
from pathlib import Path
import IPython.display as ipd
from fairseq import hub_utils
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech import CodeHiFiGANVocoder
from fairseq.models.text_to_speech.hub_interface import VocoderHubInterface
from huggingface_hub import snapshot_download
import torchaudio
cache_dir = os.getenv("HUGGINGFACE_HUB_CACHE")
# speech synthesis
library_name = "fairseq"
cache_dir = (
cache_dir or (Path.home() / ".cache" / library_name).as_posix()
)
cache_dir = snapshot_download(
f"facebook/unit_hifigan_HK_layer12.km2500_frame_TAT-TTS", cache_dir=cache_dir, library_name=library_name
)
x = hub_utils.from_pretrained(
cache_dir,
"model.pt",
".",
archive_map=CodeHiFiGANVocoder.hub_models(),
config_yaml="config.json",
fp16=False,
is_vocoder=True,
)
with open(f"{x['args']['data']}/config.json") as f:
vocoder_cfg = json.load(f)
assert (
len(x["args"]["model_path"]) == 1
), "Too many vocoder models in the input"
vocoder = CodeHiFiGANVocoder(x["args"]["model_path"][0], vocoder_cfg)
tts_model = VocoderHubInterface(vocoder_cfg, vocoder)
# `unit` is a space-separated string of discrete unit IDs, typically produced by a
# speech-to-unit translation model; the sequence below is only an illustrative placeholder
unit = "1 2 3 4 5"
tts_sample = tts_model.get_model_input(unit)
wav, sr = tts_model.get_prediction(tts_sample)
ipd.Audio(wav, rate=sr)
``` | 1,918 | [
[
-0.034759521484375,
-0.05755615234375,
0.0074005126953125,
0.025726318359375,
-0.010162353515625,
-0.007549285888671875,
-0.0187530517578125,
-0.00975799560546875,
-0.00243377685546875,
0.042633056640625,
-0.033203125,
-0.04241943359375,
-0.039093017578125,
... |
raphaeldoan/raphaeldo | 2023-10-17T14:24:07.000Z | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:raphaeldoan/raphael_data",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | raphaeldoan | null | null | raphaeldoan/raphaeldo | 1 | 527 | diffusers | 2023-01-20T08:46:18 |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: raphaeldo
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
datasets:
- raphaeldoan/raphael_data
---
# LoRA DreamBooth - raphaeldoan/raphaeldo
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
raphaeldo
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'raphaeldoan/raphaeldo',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic raphaeldo jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
| 1,817 | [
[
-0.01538848876953125,
-0.0213165283203125,
0.0254974365234375,
0.01534271240234375,
-0.0279083251953125,
0.002895355224609375,
0.01239776611328125,
-0.015838623046875,
0.047454833984375,
0.03662109375,
-0.04388427734375,
-0.027496337890625,
-0.0626220703125,
... |
timm/regnetz_d8.ra3_in1k | 2023-03-22T07:17:16.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.06877",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/regnetz_d8.ra3_in1k | 0 | 527 | timm | 2023-03-22T07:16:52 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for regnetz_d8.ra3_in1k
A RegNetZ image classification model. Trained on ImageNet-1k by Ross Wightman in `timm`.
These RegNetZ B / C / D models explore different group size and layer configurations and did not follow any paper descriptions. Like EfficientNets, this architecture uses linear (non activated) block outputs and an inverted-bottleneck (mid block expansion).
* B16 : ~1.5GF @ 256x256 with a group-width of 16. Single layer stem.
* C16 : ~2.5GF @ 256x256 with a group-width of 16. Single layer stem.
* D32 : ~6GF @ 256x256 with a group-width of 32. Tiered 3-layer stem, no pooling.
* D8 : ~4GF @ 256x256 with a group-width of 8. Tiered 3-layer stem, no pooling.
* E8 : ~10GF @ 256x256 with a group-width of 8. Tiered 3-layer stem, no pooling.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 23.4
- GMACs: 4.0
- Activations (M): 23.7
- Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
- Fast and Accurate Model Scaling: https://arxiv.org/abs/2103.06877
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnetz_d8.ra3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnetz_d8.ra3_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 128, 128])
# torch.Size([1, 64, 64, 64])
# torch.Size([1, 128, 32, 32])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 1792, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnetz_d8.ra3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1792, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@InProceedings{Dollar2021,
title = {Fast and Accurate Model Scaling},
  author = {Piotr Doll{\'a}r and Mannat Singh and Ross Girshick},
booktitle = {CVPR},
year = {2021}
}
```
| 16,262 | [
[
-0.056549072265625,
-0.020111083984375,
-0.0130767822265625,
0.034759521484375,
-0.0306854248046875,
-0.00833892822265625,
-0.00927734375,
-0.040985107421875,
0.0660400390625,
0.00872802734375,
-0.05078125,
-0.03863525390625,
-0.048553466796875,
0.0018453598... |
edwardjross/xlm-roberta-base-finetuned-recipe-all | 2022-04-09T13:19:55.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"arxiv:2004.12184",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | token-classification | edwardjross | null | null | edwardjross/xlm-roberta-base-finetuned-recipe-all | 9 | 526 | transformers | 2022-04-08T14:01:31 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-recipe-all
results: []
widget:
- text: "1 sheet of frozen puff pastry (thawed)"
- text: "1/2 teaspoon fresh thyme, minced"
- text: "2-3 medium tomatoes"
- text: "1 petit oignon rouge"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-recipe-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the recipe ingredient [NER dataset](https://github.com/cosylabiiit/recipe-knowledge-mining) from the paper [A Named Entity Based Approach to Model Recipes](https://arxiv.org/abs/2004.12184) (using both the `gk` and `ar` datasets).
It achieves the following results on the evaluation set:
- Loss: 0.1169
- F1: 0.9672
On the test set it obtains an F1 of 0.9615, slightly above the CRF used in the paper.
## Model description
Predicts the tag of each token in an ingredient string.
| Tag | Significance | Example |
| --- | --- | --- |
| NAME | Name of Ingredient | salt, pepper |
| STATE | Processing State of Ingredient. | ground, thawed |
| UNIT | Measuring unit(s). | gram, cup |
| QUANTITY | Quantity associated with the unit(s). | 1, 1 1/2 , 2-4 |
| SIZE | Portion sizes mentioned. | small, large |
| TEMP | Temperature applied prior to cooking. | hot, frozen |
| DF (DRY/FRESH) | Fresh otherwise as mentioned. | dry, fresh |
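As a minimal usage sketch (not part of the original card), the model can be run through the `transformers` token-classification pipeline; `aggregation_strategy="simple"` is one assumed way to propagate subtoken tags to whole words, per the limitation noted below:
```python
from transformers import pipeline

# Minimal sketch: tag an ingredient string and group subtokens into word-level entities.
# aggregation_strategy="simple" is an assumption; verify the grouping matches your needs.
tagger = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-recipe-all",
    aggregation_strategy="simple",
)
print(tagger("1/2 teaspoon fresh thyme, minced"))
```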
## Intended uses & limitations
* Only trained on ingredient strings.
* Tags subtokens; tags should be propagated to the whole word.
* Works best with pre-tokenisation splitting of symbols (such as parentheses) and numbers (e.g. 50g -> 50 g).
* Typically only detects the first ingredient if there are multiple.
* Only trained on two American English data sources.
* Tags TEMP and DF have very little training data.
## Training and evaluation data
Both the `ar` (AllRecipes.com) and `gk` (FOOD.com) datasets were obtained from the TSVs in the authors' [repository](https://github.com/cosylabiiit/recipe-knowledge-mining).
## Training procedure
It follows the overall procedure from Chapter 4 of [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/) by Tunstall, von Werra, and Wolf.
See the [training notebook](https://github.com/EdwardJRoss/nlp_transformers_exercises/blob/master/notebooks/ch4-ner-recipe-stanford-crf.ipynb) for details.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2529 | 1.0 | 331 | 0.1303 | 0.9592 |
| 0.1164 | 2.0 | 662 | 0.1224 | 0.9640 |
| 0.0904 | 3.0 | 993 | 0.1156 | 0.9671 |
| 0.0585 | 4.0 | 1324 | 0.1169 | 0.9672 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
| 3,323 | [
[
-0.0308837890625,
-0.04522705078125,
0.0190582275390625,
-0.00525665283203125,
0.0029735565185546875,
-0.02532958984375,
-0.0158233642578125,
-0.020355224609375,
0.01320648193359375,
0.04949951171875,
-0.054901123046875,
-0.050750732421875,
-0.05792236328125,
... |
guidecare/all-mpnet-base-v2-feature-extraction | 2023-06-14T23:50:49.000Z | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | guidecare | null | null | guidecare/all-mpnet-base-v2-feature-extraction | 0 | 526 | sentence-transformers | 2022-06-23T20:11:48 | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-mpnet-base-v2 clone
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The only difference between this model and the official one is that the `pipeline_tag` was changed to `feature-extraction` in this README.md.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with the true pairs.
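A minimal PyTorch sketch of this in-batch objective (not from the original card; the similarity scale factor is an assumed hyperparameter):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, scale=20.0):
    # Cosine similarity between every anchor and every candidate embedding in the batch.
    anchors = F.normalize(anchors, p=2, dim=1)
    positives = F.normalize(positives, p=2, dim=1)
    scores = anchors @ positives.T * scale  # shape: (batch, batch)
    # The true pair for anchor i is candidate i; every other column acts as a negative.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```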
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
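An illustrative sketch of such weighted sampling (the weights below are hypothetical; the real configuration lives in `data_config.json`):
```python
import random

# Hypothetical per-dataset weights, loosely proportional to dataset size.
weights = {"reddit_comments": 726, "s2orc_citations": 116, "wikianswers": 77}
names = list(weights)

def sample_dataset():
    # Pick which dataset the next training batch is drawn from.
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

print(sample_dataset())
```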
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | | 10,286 | [
[
-0.0269317626953125,
-0.0555419921875,
0.0250091552734375,
0.015106201171875,
-0.010528564453125,
-0.0233001708984375,
-0.0184326171875,
-0.01593017578125,
0.026123046875,
0.01690673828125,
-0.03167724609375,
-0.0367431640625,
-0.05596923828125,
0.0062408447... |
timm/resnet26.bt_in1k | 2023-04-05T18:04:51.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/resnet26.bt_in1k | 0 | 526 | timm | 2023-04-05T18:04:40 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
---
# Model card for resnet26.bt_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* Bag-of-Tricks recipe.
* SGD (w/ Nesterov) optimizer
* Cosine LR schedule with warmup
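A minimal PyTorch sketch of this optimizer and schedule combination (not the actual `timm` training script; the learning rate, warmup length, and epoch counts are assumptions):
```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the actual network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, nesterov=True)
# Linear warmup for the first 5 epochs, then cosine decay for the remaining 95.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.01, total_iters=5)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=95)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[5]
)
```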
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 16.0
- GMACs: 2.4
- Activations (M): 7.4
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet26.bt_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet26.bt_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet26.bt_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
| 38,487 | [
[
-0.06536865234375,
-0.017059326171875,
0.0016880035400390625,
0.0292816162109375,
-0.032684326171875,
-0.00860595703125,
-0.0099639892578125,
-0.02984619140625,
0.086181640625,
0.0204925537109375,
-0.049530029296875,
-0.0400390625,
-0.0460205078125,
-0.00164... |
bullhug/ClearVAE | 2023-06-03T12:49:54.000Z | [
"diffusers",
"vae",
"en",
"diffusers:AutoencoderKL",
"region:us"
] | null | bullhug | null | null | bullhug/ClearVAE | 0 | 526 | diffusers | 2023-05-25T02:33:23 | ---
language:
- en
tags:
- vae
library_name: diffusers
---
Diffusers format of ClearVAE_V2.3.safetensors from https://civitai.com/models/22354 | 142 | [
[
-0.008636474609375,
-0.005435943603515625,
0.0303802490234375,
0.0289764404296875,
-0.0257720947265625,
-0.034088134765625,
0.04248046875,
-0.0032253265380859375,
0.008941650390625,
0.0438232421875,
-0.01519012451171875,
-0.0008282661437988281,
-0.02740478515625... |
artificialguybr/CuteCartoonRedmond | 2023-10-07T20:09:25.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | artificialguybr | null | null | artificialguybr/CuteCartoonRedmond | 3 | 526 | diffusers | 2023-08-26T01:03:07 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ExCRedmAF
widget:
- text: ExCRedmAF
---
# CuteCartoon.Redmond

CuteCartoon.Redmond is here!
DOWNLOAD V2 HERE: https://huggingface.co/artificialguybr/CuteCartoonRedmond-V2
TEST ALL MY LORA HERE: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora
Introducing CuteCartoon.Redmond, the ultimate LORA for creating funny cute images of characters!
I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! If you need GPU time, then you need the great services from Redmond.AI.
It is based on SD XL 1.0 and fine-tuned on a large dataset.
The LORA has a high capacity to generate funny cute images of characters!
The tag for the model: ExCRedmAF
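A minimal sketch of loading this LoRA with `diffusers` (not from the original card; the fp16 dtype and CUDA device are assumptions):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("artificialguybr/CuteCartoonRedmond")
# Include the trigger tag ExCRedmAF in the prompt to activate the style.
image = pipe("ExCRedmAF, a cute cartoon fox in a forest").images[0]
```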
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Patreon:
https://www.patreon.com/user?u=81570187
Ko-fi: https://ko-fi.com/artificialguybr
BuyMeACoffee: https://www.buymeacoffee.com/jvkape
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/ | 1,261 | [
[
-0.039459228515625,
-0.07489013671875,
0.0248260498046875,
0.035675048828125,
-0.053741455078125,
-0.0155029296875,
-0.00031685829162597656,
-0.06866455078125,
0.08331298828125,
0.035736083984375,
-0.041107177734375,
-0.038909912109375,
-0.017578125,
0.00143... |
Sedigh/RoBERTa-large-PM-M3-Voc | 2022-07-06T09:22:41.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:cc",
"endpoints_compatible",
"region:us"
] | text-classification | Sedigh | null | null | Sedigh/RoBERTa-large-PM-M3-Voc | 1 | 525 | transformers | 2022-07-05T09:12:39 | ---
license: cc
---
PyTorch fine-tuned bio-lm RoBERTa-large model to detect COVID immunization mentions in Australian emergency departments.
Model from Facebook's [Biomedical and Clinical Language Models](https://github.com/facebookresearch/bio-lm).
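A minimal usage sketch (not from the original card; the label names returned depend on the fine-tuning setup and are not documented here):
```python
from transformers import pipeline

# Hypothetical example sentence; the model targets COVID immunization mentions.
clf = pipeline("text-classification", model="Sedigh/RoBERTa-large-PM-M3-Voc")
print(clf("Patient received first dose of COVID-19 vaccine yesterday."))
```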
```
@inproceedings{lewis-etal-2020-pretrained,
title = "Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art",
author = "Lewis, Patrick and
Ott, Myle and
Du, Jingfei and
Stoyanov, Veselin",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.17",
pages = "146--157",
}
``` | 842 | [
[
0.0038852691650390625,
-0.06951904296875,
0.04888916015625,
-0.0032806396484375,
-0.01557159423828125,
-0.00954437255859375,
-0.0217132568359375,
-0.051116943359375,
0.034088134765625,
0.00875091552734375,
-0.03826904296875,
-0.058135986328125,
-0.04367065429687... |
OpenAssistant/reward-model-deberta-v3-large | 2023-02-17T01:36:23.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"reward-model",
"reward_model",
"RLHF",
"en",
"dataset:openai/summarize_from_feedback",
"dataset:openai/webgpt_comparisons",
"dataset:Dahoas/instruct-synthetic-prompt-responses",
"license:mit",
"endpoints_compatible",
"region:... | text-classification | OpenAssistant | null | null | OpenAssistant/reward-model-deberta-v3-large | 16 | 525 | transformers | 2023-01-15T11:23:08 | ---
license: mit
datasets:
- openai/summarize_from_feedback
- openai/webgpt_comparisons
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
metrics:
- accuracy
tags:
- reward-model
- reward_model
- RLHF
---
# Reward model trained from human feedback
Reward model (RM) trained to predict which generated answer a human would judge to be better, given a question.
RMs are useful in these domains:
- QA model evaluation
- serves as reward score in RLHF
All models are trained on these datasets with the same split seed across datasets (if a validation split wasn't available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
# The question and answer are encoded as a text pair; the model returns a single reward logit.
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
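A hedged follow-up sketch (not from the original card) that reuses `rank_model`, `tokenizer`, and `question` from the snippet above to rank two candidate answers; the candidate strings are illustrative:
```python
# Compare two candidate answers by reward score; the higher score is the preferred answer.
good = "Nuclear fusion is when two atomic nuclei merge into one heavier nucleus, releasing energy - it is what powers the sun."
bad = "I don't know."

def reward(q, a):
    inputs = tokenizer(q, a, return_tensors="pt")
    return rank_model(**inputs).logits[0].item()

print(reward(question, good) > reward(question, bad))  # expected: True
```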
# Performance
Validation split accuracy
| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SyntheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | 99.94 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 |
It's likely that SyntheticGPT has some kind of surface pattern in its chosen-rejected pairs which makes it trivial to identify the better answer.
[
-0.04522705078125,
-0.033355712890625,
0.0133056640625,
0.006305694580078125,
-0.00914764404296875,
-0.005035400390625,
0.00911712646484375,
-0.023590087890625,
0.0074462890625,
0.0103912353515625,
-0.051055908203125,
-0.0164337158203125,
-0.036773681640625,
... |
timm/mixer_l16_224.goog_in21k_ft_in1k | 2023-03-27T23:11:31.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2105.01601",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/mixer_l16_224.goog_in21k_ft_in1k | 0 | 525 | timm | 2023-03-27T23:08:37 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for mixer_l16_224.goog_in21k_ft_in1k
An MLP-Mixer image classification model, pretrained on ImageNet-21k and fine-tuned on ImageNet-1k by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 208.2
- GMACs: 44.6
- Activations (M): 41.7
- Image size: 224 x 224
- **Papers:**
- MLP-Mixer: An all-MLP Architecture for Vision: https://arxiv.org/abs/2105.01601
- **Original:** https://github.com/google-research/vision_transformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # required for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mixer_l16_224.goog_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
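An optional follow-up (not in the original card) to turn `top5_class_indices` into human-readable labels; it assumes the community-maintained `huggingface/label-files` dataset repo, so verify the file name against your setup:
```python
import json
from huggingface_hub import hf_hub_download

# Map ImageNet-1k class indices to label strings.
path = hf_hub_download("huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset")
id2label = {int(k): v for k, v in json.load(open(path)).items()}
print([id2label[i.item()] for i in top5_class_indices[0]])
```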
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mixer_l16_224.goog_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{tolstikhin2021mixer,
title={MLP-Mixer: An all-MLP Architecture for Vision},
author={Tolstikhin, Ilya and Houlsby, Neil and Kolesnikov, Alexander and Beyer, Lucas and Zhai, Xiaohua and Unterthiner, Thomas and Yung, Jessica and Steiner, Andreas and Keysers, Daniel and Uszkoreit, Jakob and Lucic, Mario and Dosovitskiy, Alexey},
journal={arXiv preprint arXiv:2105.01601},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 3,278 | [
[
-0.0521240234375,
-0.0243682861328125,
0.0133819580078125,
0.017120361328125,
-0.020721435546875,
-0.0303955078125,
-0.00867462158203125,
-0.03387451171875,
0.0220184326171875,
0.0233306884765625,
-0.0440673828125,
-0.038818359375,
-0.05670166015625,
-0.0133... |
TheBloke/LLaMa-7B-GGML | 2023-07-15T18:15:35.000Z | [
"transformers",
"llama",
"license:other",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/LLaMa-7B-GGML | 56 | 525 | transformers | 2023-05-17T12:59:21 | ---
inference: false
license: other
model_type: llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's LLaMA 7b GGML
These files are GGML format model files for [Meta's LLaMA 7b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-7b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-7b)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
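As a back-of-envelope check (not from the original card), the quoted Q2_K figure can be reproduced by counting bits per 256-weight super-block. The single fp16 super-block scale is an assumption made to match the stated 2.5625 bpw; the actual llama.cpp struct layout may differ slightly:
```python
# Bits per weight for GGML_TYPE_Q2_K, per the block layout described above.
weights = 16 * 16            # 16 blocks x 16 weights per super-block
bits = weights * 2           # 2-bit quantized weights
bits += 16 * (4 + 4)         # per-block scale and min, 4 bits each
bits += 16                   # assumed: one fp16 scale per super-block
print(bits / weights)        # 2.5625
```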
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB| 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB| 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB| 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB| 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
| llama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB| 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB| 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB| 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB| 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
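For Python scripting, a minimal `llama-cpp-python` sketch follows. This is an assumption-laden example, not an official recipe: it needs a pre-GGUF release of `llama-cpp-python` that can still load ggmlv3 files, and the path and sampling parameters are illustrative:
```python
from llama_cpp import Llama

# Load a quantised GGML file; drop n_gpu_layers if you have no GPU acceleration.
llm = Llama(model_path="llama-7b.ggmlv3.q4_0.bin", n_ctx=2048, n_gpu_layers=32)
out = llm("### Instruction: Write a story about llamas\n### Response:",
          max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```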
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's LLaMA 7b
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
| 10,475 | [
[
-0.03863525390625,
-0.06903076171875,
0.03387451171875,
0.0219573974609375,
-0.026519775390625,
-0.001781463623046875,
-0.00859832763671875,
-0.04254150390625,
0.01458740234375,
0.01372528076171875,
-0.037872314453125,
-0.043121337890625,
-0.03839111328125,
... |
sail-rvc/Taylor_Swift__RVC_v1_ | 2023-07-14T07:32:53.000Z | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | sail-rvc | null | null | sail-rvc/Taylor_Swift__RVC_v1_ | 0 | 525 | transformers | 2023-07-14T07:32:40 |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Taylor_Swift__RVC_v1_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:32:52
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
| 389 | [
[
-0.028900146484375,
-0.030364990234375,
0.0125274658203125,
0.017364501953125,
-0.030364990234375,
0.005527496337890625,
0.01535797119140625,
-0.0052337646484375,
0.0304718017578125,
0.06689453125,
-0.06927490234375,
-0.04443359375,
-0.034027099609375,
-0.00... |
WisdomShell/CodeShell-7B-Chat-int4 | 2023-11-01T11:54:36.000Z | [
"transformers",
"pytorch",
"codeshell",
"text-generation",
"wisdomshell",
"pku-kcl",
"openbankai",
"custom_code",
"zh",
"en",
"has_space",
"region:us"
] | text-generation | WisdomShell | null | null | WisdomShell/CodeShell-7B-Chat-int4 | 17 | 525 | transformers | 2023-10-13T09:41:00 | ---
language:
- zh
- en
tags:
- codeshell
- wisdomshell
- pku-kcl
- openbankai
---
# CodeShell
CodeShell是[北京大学知识计算实验室](http://se.pku.edu.cn/kcl/)联合四川天府银行AI团队研发的多语言代码大模型基座。CodeShell具有70亿参数,在五千亿Tokens进行了训练,上下文窗口长度为8192。在权威的代码评估Benchmark(HumanEval与MBPP)上,CodeShell取得同等规模最好的性能。与此同时,我们提供了与CodeShell配套的部署方案与IDE插件,请参考代码库[CodeShell](https://github.com/WisdomShell/codeshell)。同时,为了方便中国用户下载,我们在modelscope中也上传了对应版本,国内用户可以访问[CodeShell-7B-Chat-int4国内地址](https://modelscope.cn/models/WisdomShell/CodeShell-7B-Chat-int4/summary)。本仓库为CodeShell-7B-Chat的Int4量化模型的仓库。
CodeShell is a multi-language code LLM developed by the [Knowledge Computing Lab](http://se.pku.edu.cn/kcl/) of Peking University. CodeShell has 7 billion parameters and was trained on 500 billion tokens with a context window length of 8192. On authoritative code evaluation benchmarks (HumanEval and MBPP), CodeShell achieves the best performance of its scale. Meanwhile, we provide deployment solutions and IDE plugins that complement CodeShell. Please refer to the [CodeShell code repository](https://github.com/WisdomShell/codeshell) for more details. This repository is for the Int4 quantized model of CodeShell-7B-Chat.
## Main Characteristics of CodeShell
* **强大的性能**:CodeShell在HumanEval和MBPP上达到了7B代码基座大模型的最优性能
* **完整的体系**:除了代码大模型,同时开源IDE(VS Code与JetBrains)插件,形成开源的全栈技术体系
* **轻量化部署**:支持本地C++部署,提供轻量快速的本地化软件开发助手解决方案
* **全面的评测**:提供支持完整项目上下文、覆盖代码生成、代码缺陷检测与修复、测试用例生成等常见软件开发活动的多任务评测体系(即将开源)
* **高效的训练**:基于高效的数据治理体系,CodeShell在完全冷启动情况下,只训练了五千亿Token即获得了优异的性能
* **Powerful Performance**: CodeShell achieves optimal performance for a 7B code base model on HumanEval and MBPP.
* **Complete Ecosystem**: In addition to the large code model, open-source IDE plugins (for VS Code and JetBrains) are also available, forming a comprehensive open-source full-stack technology system.
* **Lightweight Deployment**: Supports local C++ deployment, offering a lightweight and fast localized software development assistant solution.
* **Comprehensive Evaluation**: Provides a multi-task evaluation system that supports full project context, covering code generation, code defect detection and repair, test case generation, and other common software development activities (to be open-sourced soon).
* **Efficient Training**: Based on an efficient data governance system, CodeShell, even when starting from scratch, achieved outstanding performance with training on just 500 billion tokens.
## Quickstart
CodeShell-7B-Chat量化版本 提供了Hugging Face格式的模型,开发者可以通过下列代码加载并使用。
CodeShell-7B-Chat-int4 offers a model in the Hugging Face format. Developers can load and use it with the following code.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device('cuda:0')
model = AutoModelForCausalLM.from_pretrained('WisdomShell/CodeShell-7B-Chat-int4', trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained('WisdomShell/CodeShell-7B-Chat-int4')
history = []
query = '你是谁?'  # "Who are you?"
# `chat` is a custom helper method loaded via trust_remote_code=True
response = model.chat(query, history, tokenizer)
print(response)
history.append((query, response))
query = '用Python写一个HTTP server'  # "Write an HTTP server in Python"
response = model.chat(query, history, tokenizer)
print(response)
history.append((query, response))
```
开发者也可以通过VS Code与JetBrains插件与CodeShell-7B-Chat量化版本交互,详情请参考[VSCode插件仓库](https://github.com/WisdomShell/codeshell-vscode)与[IntelliJ插件仓库](https://github.com/WisdomShell/codeshell-intellij)。
Developers can also interact with CodeShell-7B-Chat-int4 through VS Code and JetBrains plugins. For details, please refer to the [VSCode Plugin Repository](https://github.com/WisdomShell/codeshell-vscode) and [IntelliJ Plugin Repository](https://github.com/WisdomShell/codeshell-intellij).
## Model Details
CodeShell使用GPT-2作为基础架构,采用Grouped-Query Attention、RoPE相对位置编码等技术。
CodeShell uses GPT-2 as its foundational architecture and incorporates technologies such as Grouped-Query Attention and RoPE relative position encoding.
| Hyper-parameter | Value |
|---|---|
| n_layer | 42 |
| n_embd | 4096 |
| n_inner | 16384 |
| n_head | 32 |
| num_query_groups | 8 |
| seq-length | 8192 |
| vocab_size | 70144 |
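As a toy illustration (not from the original card) of how grouped-query attention shares key/value heads, using the head counts above and a head dimension inferred as n_embd / n_head = 128:
```python
import torch

n_head, n_kv, head_dim = 32, 8, 128          # 32 query heads share 8 KV groups
q = torch.randn(1, n_head, 10, head_dim)     # (batch, heads, seq, head_dim)
kv = torch.randn(1, n_kv, 10, head_dim)
k = kv.repeat_interleave(n_head // n_kv, dim=1)  # each KV head serves 4 query heads
attn = torch.softmax(q @ k.transpose(-1, -2) / head_dim**0.5, dim=-1)
print(attn.shape)  # torch.Size([1, 32, 10, 10])
```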
## Evaluation
我们选取了目前最流行的两个代码评测数据集(HumanEval与MBPP)对模型进行评估,与目前最先进的两个7b代码大模型CodeLlama与Starcoder相比,CodeShell取得了最优的成绩。具体评测结果如下。
We selected the two most popular code evaluation datasets currently available (HumanEval and MBPP) to assess the model. Compared to the two most advanced 7B code LLMs, CodeLlama and Starcoder, CodeShell achieved the best results. The specific evaluation results are as follows.
### Pass@1
| Task | CodeShell-7b | CodeLlama-7b | Starcoder-7b |
| ------- | --------- | --------- | --------- |
| humaneval | **34.32** | 29.44 | 27.80 |
| mbpp | **38.65** | 37.60 | 34.16 |
| multiple-js | **33.17** | 31.30 | 27.02 |
| multiple-java | **30.43** | 29.24 | 24.30 |
| multiple-cpp | **28.21** | 27.33 | 23.04 |
| multiple-swift | 24.30 | **25.32** | 15.70 |
| multiple-php | **30.87** | 25.96 | 22.11 |
| multiple-d | 8.85 | **11.60** | 8.08 |
| multiple-jl | 22.08 | **25.28** | 22.96 |
| multiple-lua | 22.39 | **30.50** | 22.92 |
| multiple-r | **20.52** | 18.57 | 14.29 |
| multiple-rkt | **17.20** | 12.55 | 10.43 |
| multiple-rs | 24.55 | **25.90** | 22.82 |
# Statement
我们郑重声明,我们开发团队基于CodeShell模型开发了基于vscode和intellij的智能编码助手插件并均已开源。除此以外,无论是针对iOS、Android、HarmonyOS、Web,还是其他任何平台,我们的开发团队均未开发任何基于CodeShell模型的应用程序。我们强烈敦促所有用户不要利用CodeShell模型从事危害国家和社会安全或违法活动。同时,我们要求用户不要在未经适当的安全审查和备案的互联网服务中使用CodeShell模型。我们希望所有用户都能遵守这一原则,以确保在合规和合法的环境下发展科技。
尽管我们在确保模型训练过程中使用数据合规性方面已付出巨大努力,但由于模型和数据的复杂性,可能会出现难以预料的问题。因此,对于使用CodeShell开源模型导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误用、滥用、传播或不当利用等风险和问题,我们概不负责。
We hereby declare that our development team has developed intelligent coding assistant plugins for vscode and intellij based on the CodeShell model, both of which have been open-sourced. Beyond this, whether for iOS, Android, HarmonyOS, Web, or any other platform, our development team has not developed any applications based on the CodeShell model. We strongly urge all users not to use the CodeShell model for activities that endanger national and social security or are illegal. At the same time, we request users not to use the CodeShell model in internet services that have not undergone proper security reviews and registration. We hope all users will adhere to this principle to ensure the development of technology in a compliant and legal environment.
Despite our significant efforts to ensure compliance in the data used during the model training process, unforeseen issues may arise due to the complexity of the models and data. Therefore, we are not responsible for any issues arising from the use of the open-sourced CodeShell model, including but not limited to data security issues, public opinion risks, or risks and problems related to the model being misused, abused, disseminated, or exploited improperly.
# License
社区使用CodeShell模型需要遵循[CodeShell模型许可协议](https://huggingface.co/WisdomShell/CodeShell-7B/resolve/main/CodeShell%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)及[Apache 2.0 许可证](https://www.apache.org/licenses/LICENSE-2.0)。CodeShell模型允许用于商业用途,但如果您计划将CodeShell模型或其派生产品用于商业用途,需要您确认主体符合以下条件:
1. 关联方的服务或产品的每日平均活跃用户数(DAU)原则上不能超过100万。
2. 关联方不得是面向个人用户的软件服务提供商或云服务提供商。
3. 关联方不存在将获得授予的商业许可,在未经许可的前提下将其再授权给其他第三方的可能性。
在满足上述条件的前提下,您需要通过向codeshell.opensource@gmail.com发送电子邮件,提交《CodeShell模型许可协议》要求的申请材料。经审核通过后,将授予您一个全球的、非排他的、不可转让的、不可再授权的商业版权许可。
Community use of the CodeShell model requires adherence to the ["CodeShell License Agreement"](https://huggingface.co/WisdomShell/CodeShell-7B/resolve/main/CodeShell%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) and the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). The CodeShell model is allowed for commercial use, but if you plan to use the CodeShell model or its derivatives for commercial purposes, you need to ensure that the entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. You and your affiliates must not be a software service provider or cloud service provider targeting individual users.
3. You and your affiliates should not have the possibility of sub-licensing to other third parties without obtaining the commercial license granted.
Under the aforementioned conditions, you need to submit the application materials required by the "CodeShell License Agreement" by sending an email to codeshell.opensource@gmail.com. After approval, you will be granted a global, non-exclusive, non-transferable, non-sublicensable commercial copyright license.
| 8,617 | [
[
-0.0260009765625,
-0.036773681640625,
0.0114898681640625,
0.022979736328125,
-0.0262603759765625,
0.007556915283203125,
-0.0215911865234375,
-0.04461669921875,
0.0157928466796875,
0.0330810546875,
-0.037841796875,
-0.06756591796875,
-0.039306640625,
0.013191... |
TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ | 2023-10-14T22:51:30.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"de",
"en",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ | 3 | 525 | transformers | 2023-10-14T22:05:50 | ---
base_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
inference: false
language:
- de
- en
library_name: transformers
license: apache-2.0
model_creator: VAGO solutions
model_name: SauerkrautLM 7B v1 Mistral
model_type: mistral
pipeline_tag: text-generation
prompt_template: "Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent\
\ gibt hilfreiche, detaillierte und h\xF6fliche Antworten. \nUser: {prompt} \nAssistant:\n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM 7B v1 Mistral - GPTQ
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM 7B v1 Mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
<!-- description start -->
## Description
This repo contains GPTQ model files for [VAGO solutions's SauerkrautLM 7B v1 Mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GGUF)
* [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Sauerkraut
```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
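Each branch also ships these values in its `quantize_config.json`; below is a hedged sketch of inspecting them directly (the field names follow the usual AutoGPTQ layout — verify against the actual file):
```python
import json
from huggingface_hub import hf_hub_download

# Fetch the quantisation config of a specific branch (revision).
cfg_path = hf_hub_download("TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ",
                           "quantize_config.json",
                           revision="gptq-4bit-32g-actorder_True")
cfg = json.load(open(cfg_path))
print(cfg["bits"], cfg["group_size"], cfg["desc_act"], cfg["damp_percent"])
```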
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `SauerkrautLM-7B-v1-mistral-GPTQ`:
```shell
mkdir SauerkrautLM-7B-v1-mistral-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --local-dir SauerkrautLM-7B-v1-mistral-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir SauerkrautLM-7B-v1-mistral-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir SauerkrautLM-7B-v1-mistral-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir SauerkrautLM-7B-v1-mistral-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --local-dir SauerkrautLM-7B-v1-mistral-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-7B-v1-mistral-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/SauerkrautLM-7B-v1-mistral-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: VAGO solutions's SauerkrautLM 7B v1 Mistral

## VAGO solutions SauerkrautLM
Introducing SauerkrautLM-v1 - Your German Language Powerhouse!
We are thrilled to unveil our **very first release**, **SauerkrautLM-v1**. This remarkable creation marks a significant milestone as it is specifically **tailored for the German-speaking community**. In a landscape where German language models are scarce, we are proud to offer a solution that fills this void.
What sets SauerkrautLM-v1 apart is its versatility. Whether you are an individual looking to harness its capabilities for personal use or a business seeking to integrate it into your projects, our model is designed to accommodate all. It operates under the Apache 2.0 License, providing you with the freedom to explore its potential in both private and commercial applications.
Performance is at the heart of SauerkrautLM-v1. We put it to the **test using a customized version of MT-Bench for the German language**, and the results speak volumes. It currently stands as the most robust German language model on Hugging Face (based on German MT-Bench results), showcasing its exceptional capabilities. Rest assured, this model is here to shine and set new standards. And best of all, it comes in three different sizes (3B, 7B, 13B) to address your individual needs.
Our model's journey began with meticulous training using an **augmented dataset within the QLoRA approach**. This is just the beginning of our model series, promising even more innovative and powerful solutions in the future.
Join us on this exciting adventure as we redefine the possibilities of language modeling for the German-speaking world.
SauerkrautLM-v1 is here to empower your language-related endeavors like never before.
## All Models
| Model | HF | GPTQ | GGUF |
|-------|-------|-------|-------|
| SauerkrautLM-3b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1) | soon | soon |
| SauerkrautLM-7b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1) | soon | soon |
| SauerkrautLM-7b-v1-mistral | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) | soon | soon |
| SauerkrautLM-13b-v1 | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1) | soon | soon |
## Model Details
**SauerkrautLM-7b-v1-mistral**
**Training Dataset:**
SauerkrautLM was trained with a mix of German data augmentation and translated data.
We found that a simple translation of training data can lead to unnatural German phrasings.
Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.
**Training Procedure:**
SauerkrautLM-7b-v1-mistral was fine-tuned using QLoRA on 1 A100 80GB with Axolotl.
- **Trained by:** SauerkrautLM-v1 trained by VAGO solutions
- **Model Type:** SauerkrautLM-v1 is an auto-regressive language model based on the transformer architecture
- **Language(s):** German, English
- **License:** APACHE 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:golchinfar@vago-solutions.de)
**Prompt Template:**
```
Ein Chat zwischen einem Benutzer und einem KI-Assistenten. Der KI-Assistent gibt hilfreiche, detaillierte und höfliche Antworten.
User: {prompt}
Assistant:
```
## Evaluation
**[MT-Bench-TrueGerman](https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman)**





## MT-Bench-TrueGerman Response Example
**SauerkrautLM-7b-v1-mistral**
**Prompt**
> Stellen Sie sich vor, Sie verfassen einen Blogbeitrag, in dem Sie zwei beliebte Smartphone-Modelle vergleichen. Entwickeln Sie eine Gliederung für den Blogbeitrag, einschließlich der Hauptpunkte und Unterkapitel, um die Merkmale, die Leistung und die Benutzererfahrung der beiden Modelle effektiv zu vergleichen und gegenüberzustellen. Bitte antworten Sie in weniger als 200 Wörtern.", "Nehmen Sie Ihre vorherige Antwort und formulieren Sie sie als Limerick um.
**Antwort SauerkrautLM-7b-v1-mistral:**
```
Titel: Vergleich von Smartphones: iPhone 13 Pro Max vs. Galaxy S22 Ultra\n\nEinleitung\nIn diesem Blogbeitrag werden wir zwei beliebte Smartphone-Modelle, das iPhone 13 Pro Max und die Galaxy S22 Ultra, vergleichen. Diese beiden Geräte zeichnen sich durch innovative Merkmale, beeindruckende Leistung und eine überlegene Benutzererfahrung aus.\n\nTeil 1: Design\n- Das iPhone 13 Pro Max bietet ein modernes und elegantes Design mit einem Glas-Metallrahmen, während die Galaxy S22 Ultra ein minimalistischerer Ansatz mit einem Glas-Maske-Design verwendet.\n- Beide Geräte verfügen über eine hohe Bildschirmdichte, jedoch ist das AMOLED-Display der Galaxy S22 Ultra für eine bessere Farbtiefe und Helligkeit bekannt.\n- Die Größe der Bildschirme variiert zwischen den beiden Modellen, mit dem iPhone 13 Pro Max eine etwas kleinere 6,7 Zoll und der Galaxy S22 Ultra eine leistungsfähigere 6,5 Zoll.\n\nTeil 2: Leistung\n- Beide Geräte verfügen über hohe Leistungsmerkmale und bieten eine schnelle Leistung, aber die Entscheidungsfindung hängt von persönlichen Vorlieben und Anwendungsbereich ab.\n- Der neue A15 Bionic Chip im iPhone 13 Pro Max ist leistungsstärker als der Snapdragon 888 in der Galaxy S22 Ultra.\n- Die Akkulebenszeit variiert je nach Nutzung und Verbrauch, aber das iPhone 13 Pro Max hat im Vergleich zu seiner Vorgängerversion einen größeren Akku.\n\nTeil 3: Kamera\n- Beide Geräte bieten hochwertige Kameraeinstellungen und hochauflösende Bildqualität.\n- Der iPhone 13 Pro Max verfügt über eine 12-MP-Sucher und eine 12-MP-Ultraweitwinkelkamera, während die Galaxy S22 Ultra eine 50-MP-Primärkamera und eine 10-MP-Ultraweitwinkelkamera bietet.\n- Der Nachverfolgungsmodus der Galaxy S22 Ultra ist zudem für sich allein schneller und einfacher zu bedienen.\n\nTeil 4: Benutzererfahrung\n- Die Benutzeroberfläche des iPhone 13 Pro Max ist intuitiver und einfach zu bedienen, während die Galaxy S22 Ultra je nach Vorlieben der Nutzer eine individuelle Anpassung erfordern kann.\n- Die Nutzung von Apple-Diensten wie FaceTime und iCloud ist auf den iPhone 13 Pro Max bequemer, während die Nutzung von Samsung-Diensten auf der Galaxy S22 Ultra ein wenig komplexer sein kann.\n\nFazit\nOb du ein Fan von Apple- oder Samsung-Geräten bist, gibt es für beide Modelle in dieser Vergleichsrunde etwas, was dich begeistert. Der iPhone 13 Pro Max überzeugt mit seiner besseren Bildschirmdichte, der Galaxy S22 Ultra mit ihrer schnelleren Kamera und dem einfacheren Bedienungserlebnis. Letztendlich hängt die Wahl vom persönlichen Vorlieben und Anwendungsbereich ab.","In einer Welt, in der Smartphones wachsen und gedeihen,\nLiegt das Vergleichsspiel, wo zwei Herrschaften kämpfen.\nEin iPhone mit einem Max,\nUnd eine Galaxy, so scharf,\nMit einer Kamera, die alle fesseln wird!
```
**Prompt**
> Sie sehen ein schönes rotes Haus auf Ihrer linken Seite und ein faszinierendes Gewächshaus auf Ihrer rechten Seite, einen attraktiven beheizten rosa Ort vor sich. Also, wo ist das Weiße Haus?", "Enthält die ursprüngliche Frage Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen?"],"reference": ["Die Antwort ist \"Washington, DC\".", "Nein.
**Antwort SauerkrautLM-7b-v1-mistral:**
```
Das Weiße Haus ist direkt in der Mitte zwischen dem roten und dem beheizten roten Ort.","Nein, die ursprüngliche Frage enthält keine direkten Hinweise, um den Standort des Weißen Hauses eindeutig zu bestimmen.
```
**Prompt**
> Betrachten Sie einen Satelliten, der sich in einer kreisförmigen Umlaufbahn um die Erde befindet. Die Geschwindigkeit des Satelliten nimmt ab. Was wird mit dem orbitalen Radius des Satelliten und seiner Umlaufdauer passieren? Bitte rechtfertigen Sie Ihre Antwort unter Verwendung physikalischer Prinzipien.", "Was sind einige Sonderfälle oder Randfälle in Ihrer Lösung? Wie gehen Sie mit ihnen um?"] ,"reference": ["Der orbitale Radius wird zunehmen und die Umlaufdauer wird zunehmen
**Antwort SauerkrautLM-7b-v1-mistral:**
```
Wenn ein Satellit sich in einer kreisförmigen Umlaufbahn um die Erde befindet, bewegt er sich mit einer konstanten Geschwindigkeit. Allerdings wird diese Geschwindigkeit im Laufe der Zeit aufgrund von Faktoren wie der Entfernung des Satelliten vom Erdmittelpunkt und der Gravitationskraft abnehmen.\n\nDurch die Abnahme der Geschwindigkeit des Satelliten verringert sich auch die Orbitalgeschwindigkeit. Gemäß dem sogenannten Kepler'schen Gesetz ändert sich die Geschwindigkeit eines Satelliten proportional zur Entfernung vom Zentrum der umlaufenden Umlaufbahn. Dadurch ändert sich auch die Umlaufdauer, da die Zeit, die der Satellit benötigt, um eine bestimmte Strecke zurückzulegen, proportional zur Geschwindigkeit und der Entfernung ist.\n\nDaher wird die Orbitalgeschwindigkeit des Satelliten aufgrund der Abnahme der Geschwindigkeit langsamer, und somit wird auch seine Umlaufdauer länger.","In meiner Lösung der Veränderung des Orbitalradius und der Umlaufdauer eines Satelliten müssen wir einige Sonderfälle und Randfälle berücksichtigen.\n\nEin Sonderfall ist zum Beispiel, wenn der Satellit in eine geostationäre Umlaufbahn um die Erde gelangen kann. In dieser Umlaufbahn befindet sich der Satellit in einer konstanten Höhenlage und scheint den Menschen aus der Erdatmosphäre ausgesetzt zu sein. Der Satellit dreht sich dabei jedoch nur um die Erde und betrachtet die Erde von der Brennkammer aus.\n\nIn diesem Fall bleibt der Orbitalradius des Satelliten konstant, da er immer eine höhere Position einnimmt und somit immer die gleiche Höhe beibehalten muss, um die Erde aufzubewahren. Die Umlaufdauer wird jedoch durch die Entfernung vom Erdmittelpunkt beeinflusst, da die Erde sich während der Umlaufzeit durch ihre Rotation weiterdreht. Da der Satellit sich immer in derselben Höhe befindet, ändert sich jedoch das Verhältnis von Umlaufzeit zu Umlaufdauer nur geringfügig.\n\nEin weiterer Sonderfall ist, wenn der Satellit in eine Polarumlaufbahn gelangen kann. In einer Polarumlaufbahn umkreist der Satellit die Erde in einer Höhe von etwa 500 Kilometern über dem Polarkreis. In dieser Umlaufbahn bleibt der Orbitalradius des Satelliten konstant, da er immer eine höhere Position einnimmt und somit immer die gleiche Höhe beibehalten muss, um die Erde aufzubewahren. Die Umlaufzeit wird jedoch je nach Abstand vom Erdmittelpunkt variieren, da die Gravitationskraft mit der Entfernung abnimmt.\n\nIm Falle der Polarumlaufbahn ändern sich die Umlaufzeit und der Orbitalradius des Satelliten aufgrund der Abnahme der Gravitationskraft. Je weiter der Satellit von der Erde entfernt ist, desto langsamer dreht er sich und desto weniger Gravitation wirkt sich auf ihn aus. Dadurch ändert sich das Verhältnis von Umlaufzeit zu Umlaufdauer und somit auch die Größe des Orbitalradii.\n\nIn den genannten Sonderfällen bleibt die Umgebung des Satelliten und seine Bewegung im Grunde unverändert. Allerdings ändert sich aufgrund der Variation von Gravitationskraft, Entfernung und Umlaufhöhe der Satellit seine Umlaufzeit und sein Orbitalradius.
```
## Disclaimer
Our models have been meticulously trained on extensive datasets. While we have made diligent efforts to thoroughly screen and eliminate any instances of coarse or inappropriate language from our data, we must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out.
Furthermore, it is important to note that we have implemented filters within our models; however, we cannot always guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please reach out via our website or contact [Dr. Daryoush Vaziri](mailto:vaziri@vago-solutions.de). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
| 32,220 | [
[
-0.037994384765625,
-0.05340576171875,
0.0151214599609375,
0.01136016845703125,
-0.0149383544921875,
-0.016693115234375,
0.00168609619140625,
-0.0285491943359375,
0.01232147216796875,
0.031585693359375,
-0.039306640625,
-0.043609619140625,
-0.0269775390625,
... |
indobenchmark/indobart-v2 | 2022-06-21T17:52:37.000Z | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"indogpt",
"indobenchmark",
"indonlg",
"id",
"dataset:Indo4B+",
"arxiv:2104.08200",
"license:mit",
"autotrain_compatible",
"has_space",
"region:us"
] | text2text-generation | indobenchmark | null | null | indobenchmark/indobart-v2 | 7 | 524 | transformers | 2022-03-02T23:29:05 | ---
language: id
tags:
- indogpt
- indobenchmark
- indonlg
license: mit
inference: false
datasets:
- Indo4B+
---
# IndoBART-v2 Model
[IndoBART-v2](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART architecture. It was pretrained using the BART training objective.
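A minimal loading sketch follows (assumptions: the checkpoint loads through the standard `transformers` MBart classes, and the dedicated `IndoNLGTokenizer` ships with the authors' `indobenchmark-toolkit` package; this is not an official snippet from the card):
```python
from transformers import MBartForConditionalGeneration
from indobenchmark import IndoNLGTokenizer # assumed: pip install indobenchmark-toolkit
tokenizer = IndoNLGTokenizer.from_pretrained("indobenchmark/indobart-v2")
model = MBartForConditionalGeneration.from_pretrained("indobenchmark/indobart-v2")
# Encode an Indonesian sentence and generate a reconstruction
inputs = tokenizer("aku pergi ke toko obat", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```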
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
| 1,394 | [
[
-0.0281219482421875,
-0.0443115234375,
0.0019474029541015625,
0.03826904296875,
-0.052154541015625,
-0.01190948486328125,
-0.0345458984375,
-0.030914306640625,
0.01493072509765625,
0.044952392578125,
-0.02606201171875,
-0.0111083984375,
-0.035003662109375,
0... |
FredZhang7/paint-journey-v2 | 2023-02-05T07:14:46.000Z | [
"diffusers",
"text-to-image",
"midjourney",
"stable-diffusion",
"disco-diffusion",
"art",
"arxiv:2208.12242",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | FredZhang7 | null | null | FredZhang7/paint-journey-v2 | 31 | 524 | diffusers | 2023-01-03T01:25:12 | ---
license: creativeml-openrail-m
language:
- en
tags:
- text-to-image
- midjourney
- stable-diffusion
- disco-diffusion
- art
- arxiv:2208.12242
inference: true
library_name: diffusers
---
## Paint Journey V2 is [V1](https://huggingface.co/FredZhang7/paint-journey-v1) fine-tuned on 768x768 oil paintings by Midjourney V4, Open Journey V2, Disco Diffusion, and artists who gave permission
Begin the prompt with **((oil painting))** to add the oil paint effect. For digital and other painting styles, use similar prompts as you would for Midjourney V4 (with some tweaks), Stable Diffusion v1.5 (add more styles), Open Journey V2, or Disco Diffusion.
[](https://colab.research.google.com/github/AMLA-UBC/100-Exploring-the-World-of-Modern-Machine-Learning/blob/main/assets/PaintJourneyV2.ipynb)
## Examples
*All examples were generated using Camenduru's WebUI (see the Colab file)*

*⬆️ 768x1136 portraits, generated using descriptive prompts and without face restoration, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/character_settings.txt)*

*⬆️ 1280x768 (mostly) natural landscapes, used shorter prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/nature_settings.txt)*

*⬆️ 1152x768 outerspace landscapes, used descriptive prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/outerspace_settings.txt)*

*⬆️ 1280x768 lamborghini, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/lamborghini_settings.txt)*

*⬆️ 960x768 Eevee, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/eevee_settings.txt)*
## Comparisons
Paint Journey V2's paintings are closer to human-drawn art than Open Journey V2's.
Compared to models like Dreamlike Diffusion 1.0, PJ V2 tends to generate 768x768 or higher resolution images with reduced noise levels.
This model is also capable of generating stunning portraits at 768x1136 resolution without duplicated faces (with [Camenduru's WebUI](https://github.com/camenduru/stable-diffusion-webui)), a difficult task for models like DreamShaper 3.3.
At lower resolutions, DreamShaper 3.3 tends to generate higher-quality portraits than PJ V2 in terms of noise levels, given the same (short) positive and negative prompts.
However, PJ V2 can craft more stunning masterpieces with more descriptive positive and negative prompts and can still generate beautiful landscapes with shorter prompts.
## Training
Instead of solely fine-tuning its Unet, Paint Journey V2 focuses on fine-tuning its text encoder with a diverse range of prompts.
This allows for a seamless blend of the digital and oil painting styles into various other types of prompts, resulting in a more natural and dynamic output.
This model was trained on a curated dataset of roughly 300 images hand-picked from Midjourney, [Prompt Hero](https://prompthero.com/), [PixaBay](https://pixabay.com/images/search/paintings/), Open Journey V2, and Reddit.
Before training, I used R-ESRGAN 4x on many images to increase their resolution and reduce noise.
## Running out of prompts?
Useful resources: [Lexica.art](https://lexica.art/), [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2), [Prompt Hero](https://prompthero.com/)
## Output Dimensions
Portrait sizes include, but are not limited to, `512x768`, `768x768`, and `768x1136`.
Landscape sizes include, but are not limited to, `768x512`, `768x768`, `1152x768`, and `1280x768`.
## Camenduru's WebUI
```
git clone -b v1.6 https://github.com/camenduru/stable-diffusion-webui
```
<details>
<summary> Click to use Automatic1111's WebUI instead (its outputs may be less artistic) </summary>
```
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```
</details>
Download [checkpoint](./paint_journey_v2.ckpt) and [vae](./paint_journey_v2.vae.pt) to the `./stable-diffusion-webui/models/Stable-diffusion` folder. Run `webui-user.bat`.
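For example (a minimal sketch assuming `wget` and the default WebUI folder layout; `resolve/main` is the standard Hugging Face direct-download path):
```bash
cd stable-diffusion-webui/models/Stable-diffusion
wget https://huggingface.co/FredZhang7/paint-journey-v2/resolve/main/paint_journey_v2.ckpt
wget https://huggingface.co/FredZhang7/paint-journey-v2/resolve/main/paint_journey_v2.vae.pt
```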
## 🧨 Diffusers
*Tip: using double, triple, or quadruple brackets around a word (e.g. "((WORD))") will put an 'emphasis' on it*
```bash
pip install --upgrade diffusers transformers
```
```python
# see more sampling algorithms at https://huggingface.co/docs/diffusers/using-diffusers/schedulers#changing-the-scheduler
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch, random, datetime
pipe = StableDiffusionPipeline.from_pretrained("FredZhang7/paint-journey-v2")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
def random_seed():
return random.randint(0, 2**32 - 1)
prompt = "((oil painting)), gentle waves, bright blue sky, white sails billowing, sun glistening on the surface, salty sea air, distant horizon, calm breeze, birds soaring overhead, vibrant colors, artstation digital painting, high resolution, uhd, 4 k, 8k wallpaper" # what you want to see
negative_prompt = "low-res, blurry, haze, dark clouds looming, choppy waves, engine failing, sails tattered, stormy winds".split(", ") # what you don't want to see
seed = random_seed() # replace with the desired seed if needed
width, height = 1280, 768 # width and height of the generated image
cfg_scale = 7.5 # classifier-free guidance scale; smaller means more creative, 7 to 11 is usually a good range
num_inference_steps = 40 # sampling steps, 30 to 40 is usually good for Euler Ancestral
generator = torch.Generator("cuda").manual_seed(seed)
with torch.autocast("cuda"):
image = pipe(prompt=prompt,
num_inference_steps=num_inference_steps,
width=width, height=height,
generator=generator,
guidance_scale=cfg_scale).images[0]
def generate_filename(string, seed):
invalid_chars = ["<", ">", ":", '"', "/", "\\", "|", "?", "*"]
for char in invalid_chars:
string = string.replace(char, "")
return f"{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}_{seed}_{string}"
image.save(f"./{generate_filename(prompt, seed)}.png")
```
## Safety Checker V2
The official [stable diffusion safety checker](https://huggingface.co/CompVis/stable-diffusion-safety-checker) uses up 1.22GB VRAM.
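If you prefer to drop the built-in checker and screen outputs yourself, it can be detached at load time (a minimal sketch using standard `diffusers` keyword arguments; generated images should then be screened separately, e.g. with the lighter classifier recommended below):
```python
from diffusers import StableDiffusionPipeline
# Load without the built-in safety checker to reclaim its ~1.22GB of VRAM;
# outputs must then be screened by a separate classifier.
pipe = StableDiffusionPipeline.from_pretrained(
    "FredZhang7/paint-journey-v2",
    safety_checker=None,
    requires_safety_checker=False,
)
```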
I recommend using [Google Safesearch Mini V2](https://huggingface.co/FredZhang7/google-safesearch-mini-v2) (220MB) to save 1.0GB VRAM. | 6,742 | [
[
-0.04107666015625,
-0.035980224609375,
0.041412353515625,
0.032318115234375,
-0.00650787353515625,
0.0014314651489257812,
0.0081329345703125,
-0.038787841796875,
0.0204010009765625,
0.035858154296875,
-0.06610107421875,
-0.03228759765625,
-0.042144775390625,
... |
DucHaiten/DucHaiten-AnimeFurry | 2023-07-26T10:50:00.000Z | [
"diffusers",
"art",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | DucHaiten | null | null | DucHaiten/DucHaiten-AnimeFurry | 2 | 524 | diffusers | 2023-05-15T11:27:02 | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
library_name: diffusers
tags:
- art
---
Prompt: cute furry
Negative prompt: illustration, 3d, 2d, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, vaginas in breasts, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error, bad image, bad photo
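Since the card carries the `diffusers` tag, the checkpoint can presumably be loaded with the standard Stable Diffusion pipeline (a minimal sketch using the prompt above and a shortened negative prompt; not an official snippet from the author):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("DucHaiten/DucHaiten-AnimeFurry", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe(
    prompt="cute furry",
    negative_prompt="(worst quality:2), (low quality:2), lowres, bad anatomy, bad hands",
    num_inference_steps=30,
).images[0]
image.save("sample.png")
```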










 | 3,415 | [
[
-0.05926513671875,
-0.0241241455078125,
0.032989501953125,
0.041961669921875,
-0.0182037353515625,
0.01381683349609375,
0.0239715576171875,
-0.052001953125,
0.0653076171875,
0.0552978515625,
-0.04449462890625,
-0.034393310546875,
-0.0303802490234375,
0.01420... |
luodian/OTTER-MPT7B-Init | 2023-07-18T14:36:35.000Z | [
"transformers",
"pytorch",
"flamingo",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | luodian | null | null | luodian/OTTER-MPT7B-Init | 1 | 524 | transformers | 2023-07-07T13:15:32 | ---
license: mit
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/MKmyP9wH/new-banner.png" width="80%" height="80%">
</p>
<div>
<div align="center">
<a href='https://brianboli.com/' target='_blank'>Bo Li*<sup>1</sup></a> 
<a href='https://zhangyuanhan-ai.github.io/' target='_blank'>Yuanhan Zhang*<sup>,1</sup></a> 
<a href='https://cliangyu.com/' target='_blank'>Liangyu Chen*<sup>,1</sup></a> 
<a href='https://king159.github.io/' target='_blank'>Jinghao Wang*<sup>,1</sup></a> 
<a href='https://pufanyi.github.io/' target='_blank'>Fanyi Pu*<sup>,1</sup></a> 
</br>
<a href='https://jingkang50.github.io/' target='_blank'>Jingkang Yang<sup>1</sup></a> 
<a href='https://chunyuan.li/' target='_blank'>Chunyuan Li<sup>2</sup></a> 
<a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu<sup>1</sup></a>
</div>
<div>
<div align="center">
<sup>1</sup>S-Lab, Nanyang Technological University 
<sup>2</sup>Microsoft Research, Redmond
</div>
This weight is for **initializing training for Otter**. It is converted directly from OpenFlamingo.
You can load and try this model using
```python
import transformers
# Note: OtterForConditionalGeneration is provided by the Otter repository
# (https://github.com/Luodian/Otter); import it according to your checkout's layout.
model = OtterForConditionalGeneration.from_pretrained("luodian/OTTER-MPT7B-Init", device_map="sequential")
model.text_tokenizer.padding_side = "left"
tokenizer = model.text_tokenizer
image_processor = transformers.CLIPImageProcessor()
model.eval()
```
You can also start training Otter via the following command
```bash
python -m accelerate.commands.launch --config_file=./pipeline/accelerate_configs/accelerate_config_fsdp.yaml \
pipeline/train/instruction_following.py \
--pretrained_model_name_or_path=luodian/OTTER-MPT7B-Init \
--mimicit_path=/data/azure_storage/otter/mimicit/xx/xx_instructions.json \
--images_path=/data/azure_storage/otter/mimicit/xx/xx.json \
--batch_size=4 --num_epochs=1 --report_to_wandb \
--wandb_entity=ntu-slab \
--external_save_dir=/data/bli/checkpoints \
--save_hf_model \
--run_name=OTTER-MPT1B \
--wandb_project=OTTER-MPT1B \
--workers=4 \
--lr_scheduler=cosine \
--learning_rate=1e-5 \
--warmup_steps_ratio=0.01
```
If you wish to initialize video instruction tuning, you should add
```json
"max_num_frames": 128
```
to `config.json` inside the folder. | 2,280 | [
[
-0.0401611328125,
-0.0238494873046875,
0.0006361007690429688,
0.0283203125,
-0.0209503173828125,
-0.00391387939453125,
0.01361846923828125,
-0.0301361083984375,
0.02197265625,
-0.0014333724975585938,
-0.053314208984375,
-0.0226287841796875,
-0.03802490234375,
... |
LinkSoul/Chinese-Llama-2-7b-4bit | 2023-08-16T03:23:15.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"dataset:LinkSoul/instruction_merge_set",
"license:openrail",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | LinkSoul | null | null | LinkSoul/Chinese-Llama-2-7b-4bit | 69 | 524 | transformers | 2023-07-22T13:21:52 | ---
license: openrail
datasets:
- LinkSoul/instruction_merge_set
language:
- zh
- en
widget:
- text: "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n用中文回答,When is the best time to visit Beijing, and do you have any suggestions for me? [/INST]"
example_title: "北京"
- text: "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n用英文回答,特朗普是谁? [/INST]"
example_title: "特朗普是谁"
---
# Chinese Llama 2 7B 4bit
Fully open-source and fully commercially usable **Chinese Llama2 model and Chinese/English SFT dataset**; the input format strictly follows the *llama-2-chat* format, making it compatible with all optimizations targeting the original *llama-2-chat* model.

## Basic Demo

## Try It Online
> Talk is cheap, Show you the Demo.
- [Demo / HuggingFace Spaces](https://huggingface.co/spaces/LinkSoul/Chinese-Llama-2-7b)
- [One-click Colab launch](#) // in preparation
## Downloads
- Model download: [Chinese Llama2 Chat Model](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b)
- 4-bit quantized: [Chinese Llama2 4bit Chat Model](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b-4bit)
> We used Chinese and English SFT datasets totalling 10 million samples.
- Dataset: [https://huggingface.co/datasets/LinkSoul/instruction_merge_set](https://huggingface.co/datasets/LinkSoul/instruction_merge_set)
- Training and inference code: [https://github.com/LinkSoul-AI/Chinese-Llama-2-7b](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b)
## Quick Test
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
# Original version
# model_path = "LinkSoul/Chinese-Llama-2-7b"
# 4 bit version
model_path = "LinkSoul/Chinese-Llama-2-7b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
if model_path.endswith("4bit"):
model = AutoModelForCausalLM.from_pretrained(
model_path,
load_in_4bit=True,
torch_dtype=torch.float16,
device_map='auto'
)
else:
model = AutoModelForCausalLM.from_pretrained(model_path).half().cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
instruction = """[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{} [/INST]"""
prompt = instruction.format("用英文回答,什么是夫妻肺片?")
generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=4096, streamer=streamer)
```
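Note that the 4-bit path (`load_in_4bit=True`) relies on extra packages; a minimal install sketch (assuming a CUDA-capable environment):
```bash
pip install transformers accelerate bitsandbytes
```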
## Related Projects
- [Llama2](https://ai.meta.com/llama/)
## License
[Apache-2.0 license](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b/blob/main/LICENSE)
## WeChat Group
Welcome to join the [WeChat group](.github/QRcode.jpg)
| 3,872 | [
[
-0.0198822021484375,
-0.0660400390625,
0.01470184326171875,
0.04119873046875,
-0.05267333984375,
0.006153106689453125,
0.004459381103515625,
-0.05426025390625,
0.0237579345703125,
0.00461578369140625,
-0.040283203125,
-0.033660888671875,
-0.05145263671875,
0... |
topel/ConvNeXt-Tiny-AT | 2023-10-04T14:17:07.000Z | [
"safetensors",
"audio tagging",
"audio events",
"audio embeddings",
"convnext-audio",
"audioset",
"license:mit",
"region:us"
] | null | topel | null | null | topel/ConvNeXt-Tiny-AT | 7 | 524 | null | 2023-09-22T10:39:50 | ---
license: mit
tags:
- audio tagging
- audio events
- audio embeddings
- convnext-audio
- audioset
inference: false
---
**ConvNeXt-Tiny-AT** is an audio tagging CNN model, trained on **AudioSet** (balanced+unbalanced subsets). It reached 0.471 mAP on the test set [(Paper)](https://www.isca-speech.org/archive/interspeech_2023/pellegrini23_interspeech.html).
The model was trained on 10-second audio recordings sampled at 32 kHz, but you can provide any audio file; resampling and padding/cropping are included in the following code snippet.
The model provides logits and probabilities for the 527 audio event tags of AudioSet (see http://research.google.com/audioset/index.html).
Two methods can also be used to get scene embeddings (a single vector per file) and frame-level embeddings; see below.
The scene embedding is obtained from the frame-level embeddings by applying mean pooling along the frequency dimension, followed by mean pooling plus max pooling along the time dimension.
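As a rough illustration, that pooling chain can be reproduced from a frame-level tensor (a minimal sketch, not the model's own code; it assumes a `[batch, channels, time, freq]` layout and that the mean- and max-pooled time vectors are summed, which keeps the scene embedding 768-dimensional):
```python
import torch

# Dummy frame-level embeddings shaped [batch, channels, time, freq],
# e.g. [1, 768, 31, 7] as in the frame-level example below.
frame_emb = torch.randn(1, 768, 31, 7)

x = frame_emb.mean(dim=3)                  # mean pooling over the frequency dim -> [1, 768, 31]
scene_emb = x.mean(dim=2) + x.amax(dim=2)  # mean + max pooling over the time dim -> [1, 768]
print(scene_emb.shape)                     # torch.Size([1, 768])
```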
# Install
This code is based on our repo: https://github.com/topel/audioset-convnext-inf
You can pip install it:
```bash
pip install git+https://github.com/topel/audioset-convnext-inf@pip-install
```
# Usage
Below is an example of how to instantiate the model, make tag predictions on an audio sample, and get embeddings (scene and frame levels).
```python
import os
import numpy as np
import torch
from torch.nn import functional as TF
import torchaudio
import torchaudio.functional as TAF
from audioset_convnext_inf.pytorch.convnext import ConvNeXt
from audioset_convnext_inf.utils.utilities import read_audioset_label_tags
model = ConvNeXt.from_pretrained("topel/ConvNeXt-Tiny-AT", map_location='cpu')
print(
"# params:",
sum(param.numel() for param in model.parameters() if param.requires_grad),
)
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
if "cuda" in str(device):
model = model.to(device)
```
Output:
```
# params: 28222767
```
## Inference: get logits and probabilities
To run the following, first download `254906__tpellegrini__cavaco1.wav` and `class_labels_indices.csv` from this repository.
```python
sample_rate = 32000
audio_target_length = 10 * sample_rate # 10 s
# AUDIO_FNAME = "f62-S-v2swA_200000_210000.wav"
AUDIO_FNAME = "254906__tpellegrini__cavaco1.wav"
current_dir=os.getcwd()
AUDIO_FPATH = os.path.join(current_dir, AUDIO_FNAME)
waveform, sample_rate_ = torchaudio.load(AUDIO_FPATH)
if sample_rate_ != sample_rate:
print("Resampling from %d to 32000 Hz"%sample_rate_)
waveform = TAF.resample(
waveform,
sample_rate_,
sample_rate,
)
if waveform.shape[-1] < audio_target_length:
print("Padding waveform")
missing = max(audio_target_length - waveform.shape[-1], 0)
waveform = TF.pad(waveform, (0,missing), mode="constant", value=0.0)
elif waveform.shape[-1] > audio_target_length:
print("Cropping waveform")
waveform = waveform[:, :audio_target_length]
waveform = waveform.contiguous()
waveform = waveform.to(device)
print("\nInference on " + AUDIO_FNAME + "\n")
with torch.no_grad():
model.eval()
output = model(waveform)
logits = output["clipwise_logits"]
print("logits size:", logits.size())
probs = output["clipwise_output"]
# Equivalent: probs = torch.sigmoid(logits)
print("probs size:", probs.size())
lb_to_ix, ix_to_lb, id_to_ix, ix_to_id = read_audioset_label_tags(os.path.join(current_dir, "class_labels_indices.csv"))
threshold = 0.25
sample_labels = np.where(probs[0].clone().detach().cpu() > threshold)[0]
print("\nPredicted labels using activity threshold 0.25:\n")
print(sample_labels)
for l in sample_labels:
print("%s: %.3f"%(ix_to_lb[l], probs[0,l]))
```
Output:
```
Inference on 254906__tpellegrini__cavaco1.wav
Resampling from 44100 to 32000 Hz
Padding waveform
logits size: torch.Size([1, 527])
probs size: torch.Size([1, 527])
Predicted labels using activity threshold 0.25:
[137 138 139 140 149 151]
Music: 0.896
Musical instrument: 0.686
Plucked string instrument: 0.608
Guitar: 0.369
Mandolin: 0.710
Ukulele: 0.268
```
Technically speaking, it's neither a Mandolin nor a Ukulele, but a Brazilian cousin, the cavaquinho!
## Get audio scene embeddings
```python
with torch.no_grad():
model.eval()
output = model.forward_scene_embeddings(waveform)
print("\nScene embedding, shape:", output.size())
```
Output:
```
Scene embedding, shape: torch.Size([1, 768])
```
## Get frame-level embeddings
```python
with torch.no_grad():
model.eval()
output = model.forward_frame_embeddings(waveform)
print("\nFrame-level embeddings, shape:", output.size())
```
Output:
```
Frame-level embeddings, shape: torch.Size([1, 768, 31, 7])
```
# Zenodo
The checkpoint is also available on Zenodo: https://zenodo.org/record/8020843/files/convnext_tiny_471mAP.pth?download=1
# Citation
[Paper available](https://www.isca-speech.org/archive/interspeech_2023/pellegrini23_interspeech.html)
Cite as: Pellegrini, T., Khalfaoui-Hassani, I., Labbé, E., Masquelier, T. (2023) Adapting a ConvNeXt Model to Audio Classification on AudioSet. Proc. INTERSPEECH 2023, 4169-4173, doi: 10.21437/Interspeech.2023-1564
```bibtex
@inproceedings{pellegrini23_interspeech,
author={Thomas Pellegrini and Ismail Khalfaoui-Hassani and Etienne Labb\'e and Timoth\'ee Masquelier},
title={{Adapting a ConvNeXt Model to Audio Classification on AudioSet}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
pages={4169--4173},
doi={10.21437/Interspeech.2023-1564}
}
```
| 5,602 | [
[
-0.03765869140625,
-0.048309326171875,
0.00850677490234375,
0.007587432861328125,
-0.0185089111328125,
-0.0204620361328125,
-0.03240966796875,
-0.0206451416015625,
0.0190582275390625,
0.0140228271484375,
-0.050445556640625,
-0.05279541015625,
-0.03350830078125,
... |
shellypeng/anime-style-soft | 2023-10-13T15:27:33.000Z | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | unconditional-image-generation | shellypeng | null | null | shellypeng/anime-style-soft | 0 | 524 | diffusers | 2023-10-13T15:14:00 | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('shellypeng/anime-style-soft')
image = pipeline().images[0]
image
```
| 467 | [
[
-0.0252685546875,
-0.05523681640625,
0.0231170654296875,
0.0233612060546875,
-0.04290771484375,
-0.02838134765625,
0.033721923828125,
0.00753021240234375,
0.0162811279296875,
0.03277587890625,
-0.04754638671875,
-0.035400390625,
-0.035736083984375,
-0.010307... |
prithivida/formal_to_informal_styletransfer | 2021-06-21T08:08:37.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | prithivida | null | null | prithivida/formal_to_informal_styletransfer | 5 | 523 | transformers | 2022-03-02T23:29:05 | ## This model belongs to the Styleformer project
[Please refer to github page](https://github.com/PrithivirajDamodaran/Styleformer)
| 134 | [
[
0.01120758056640625,
-0.0225982666015625,
0.01119232177734375,
0.0287628173828125,
-0.01194000244140625,
0.01468658447265625,
0.031036376953125,
0.0175018310546875,
0.0252685546875,
0.058441162109375,
-0.05780029296875,
-0.021209716796875,
-0.0167236328125,
... |
sentence-transformers/nli-roberta-base-v2 | 2022-06-15T22:41:43.000Z | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | sentence-transformers | null | null | sentence-transformers/nli-roberta-base-v2 | 0 | 523 | sentence-transformers | 2022-03-02T23:29:05 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/nli-roberta-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-roberta-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
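Since this is a sentence-similarity model, the embeddings can be compared directly; a minimal sketch using `util.cos_sim` (assuming a recent version of the library):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/nli-roberta-base-v2')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```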
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-roberta-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/nli-roberta-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-roberta-base-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 3,675 | [
[
-0.017791748046875,
-0.057586669921875,
0.0212554931640625,
0.033111572265625,
-0.017578125,
-0.03265380859375,
-0.0269775390625,
-0.0092620849609375,
0.0141448974609375,
0.0301971435546875,
-0.045989990234375,
-0.03338623046875,
-0.051666259765625,
0.010368... |
SALT-NLP/FLANG-BERT | 2022-11-29T17:06:37.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | SALT-NLP | null | null | SALT-NLP/FLANG-BERT | 4 | 523 | transformers | 2022-06-24T02:37:04 |
---
language: "en"
license: afl-3.0
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain-specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-BERT
FLANG-BERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain-specific terms. It is built by further training the BERT language model on financial-domain text, improving over previous models through the use of domain knowledge and vocabulary.
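For a quick test of the masked-language-modelling head, the standard `transformers` fill-mask pipeline can be used (a minimal sketch reusing the widget prompt above; not an official snippet from the authors):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SALT-NLP/FLANG-BERT")
for pred in fill_mask("Stocks rallied and the British pound [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```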
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark built from five diverse financial-domain datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-BERT related issues and questions.
| 2,795 | [
[
-0.036376953125,
-0.06298828125,
-0.0013952255249023438,
0.0293121337890625,
-0.0029754638671875,
-0.004474639892578125,
-0.034149169921875,
-0.0537109375,
0.005588531494140625,
0.032012939453125,
-0.017181396484375,
-0.028228759765625,
-0.0423583984375,
0.0... |
OPERFIND/step1 | 2023-07-20T16:55:28.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | OPERFIND | null | null | OPERFIND/step1 | 0 | 523 | diffusers | 2023-07-20T16:43:18 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### step1 Dreambooth model trained by OPERFIND with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 495 | [
[
-0.028717041015625,
-0.066650390625,
0.042572021484375,
0.0263519287109375,
-0.0236358642578125,
0.0224761962890625,
0.0271759033203125,
-0.0201568603515625,
0.04144287109375,
0.0164947509765625,
-0.0186309814453125,
-0.02008056640625,
-0.0281829833984375,
-... |
ptx0/terminus-xl-gamma-training | 2023-11-05T21:37:55.000Z | [
"diffusers",
"license:openrail++",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | ptx0 | null | null | ptx0/terminus-xl-gamma-training | 0 | 523 | diffusers | 2023-10-04T02:59:10 | ---
license: openrail++
---
# Terminus XL - Gamma (v2 preview)
This is an in-progress checkpoint of [the "Gamma" model](/ptx0/terminus-xl-gamma-v1) from the Terminus XL series.
It's updated periodically for evaluation as training progresses.
[
-0.0291290283203125,
-0.039886474609375,
0.061248779296875,
-0.00846099853515625,
-0.0211029052734375,
0.0095367431640625,
0.052642822265625,
-0.006359100341796875,
0.0390625,
0.0401611328125,
-0.096435546875,
-0.004955291748046875,
-0.0362548828125,
0.00128... |
howtodowtle/adiaz-1-not-good | 2022-12-17T19:12:46.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | howtodowtle | null | null | howtodowtle/adiaz-1-not-good | 0 | 522 | diffusers | 2022-12-17T19:09:25 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### adiaz_1_not_good Dreambooth model trained by howtodowtle with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:

| 827 | [
[
-0.03253173828125,
-0.0556640625,
0.031219482421875,
0.037109375,
-0.03216552734375,
0.02001953125,
0.0167694091796875,
-0.03424072265625,
0.04254150390625,
0.003635406494140625,
-0.02001953125,
-0.0285186767578125,
-0.042327880859375,
-0.0177154541015625,
... |
SaiedAlshahrani/arywiki_20230101_roberta_mlm_bots | 2023-10-29T01:07:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"ar",
"dataset:SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_bots",
"dataset:SaiedAlshahrani/MASD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | SaiedAlshahrani | null | null | SaiedAlshahrani/arywiki_20230101_roberta_mlm_bots | 0 | 522 | transformers | 2023-04-29T15:44:28 | ---
tags:
- generated_from_trainer
model-index:
- name: aryRoBERTa
results: []
metrics:
- perplexity
license: mit
datasets:
- SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_bots
- SaiedAlshahrani/MASD
language:
- ar
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: الهدف من الحياة هو <mask>
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Moroccan Arabic Wikipedia (aryRoBERTa<sub>BASE</sub>)
This aryRoBERTa<sub>BASE</sub> model has been trained *from scratch* on the Moroccan Arabic Wikipedia articles, downloaded on the 1st of January 2023, processed using
`Gensim` Python library, preprocessed using the `tr` Linux/Unix utility and the `CAMeLTools` Python toolkit for Arabic NLP, and hosted here at [SaiedAlshahrani/Moroccan\_Arabic\_Wikipedia\_20230101\_bots](https://huggingface.co/datasets/SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_bots).
It achieves the following results on the evaluation set:
- Pseudo-Perplexity: 5,379.89
## Model description
We trained this Moroccan Arabic Wikipedia Masked Language Model (aryRoBERTa<sub>BASE</sub>) to evaluate its performance using the Fill-Mask evaluation task on the Masked Arab States Dataset ([MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)), and to measure the *impact* of **template-based translation** on the Egyptian Arabic Wikipedia edition.
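For a quick qualitative check, the same fill-mask task can be run locally with the standard `transformers` pipeline (a minimal sketch reusing the widget prompt from the metadata above; not an official snippet from the authors):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SaiedAlshahrani/arywiki_20230101_roberta_mlm_bots")
for pred in fill_mask("الهدف من الحياة هو <mask>"):
    print(pred["token_str"], round(pred["score"], 4))
```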
For more details about the experiment, please **read** and **cite** our paper:
```bash
@inproceedings{alshahrani-etal-2023-implications,
title = "{{Performance Implications of Using Unrepresentative Corpora in Arabic Natural Language Processing}}",
author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna",
booktitle = "Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://webspace.clarkson.edu/~alshahsf/unrepresentative_corpora.pdf",
doi = "#################",
pages = "###--###",
abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
}
```
## Intended uses & limitations
We do **not** recommend using this model because it was trained *only* on the Moroccan Arabic Wikipedia articles, <u>unless</u> you fine-tune the model on a large, organic, and representative Moroccan Arabic dataset.
## Training and evaluation data
We have trained this model on the Moroccan Arabic Wikipedia articles ([SaiedAlshahrani/Moroccan\_Arabic\_Wikipedia\_20230101\_bots](https://huggingface.co/datasets/SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_bots)) without using any validation or evaluation data (only training data) due to a lack of computational power.
## Training procedure
We have trained this model using the Paperspace GPU-Cloud service. We used a machine with 8 CPUs, 45GB of RAM, and an A6000 GPU with 48GB of VRAM.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5
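For reference, these settings map onto the standard `transformers` `TrainingArguments` roughly as follows (a minimal sketch; the authors' exact training script is not shown in this card, and `train_batch_size: 128` is assumed to be the per-device batch size):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters onto the Trainer API.
training_args = TrainingArguments(
    output_dir="arywiki_20230101_roberta_mlm_bots",
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```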
### Training results
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 1 | 35 | 9.561500 |
| 2 | 70 | 7.946000 |
| 3 | 105 | 7.420400 |
| 4 | 140 | 7.197800 |
| 5 | 175 | 7.174400 |
| Train Runtime | Train Samples Per Second | Train Steps Per Second | Total Flos | Train Loss | Epoch |
|:--------------:|:------------------------:|:----------------------:|:-------------------------:|:----------:|:--------:|
| 192.684800 | 121.260000 | 0.960000 | 774708261150720.000000 | 7.812142 | 5.000000 |
### Evaluation results
This aryRoBERTa<sub>BASE</sub> model has been evaluated on the Masked Arab States Dataset ([SaiedAlshahrani/MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)).
| K=10 | K=50 | K=100 |
|:----:|:-----:|:----:|
| 0.00% | 0.00% | 0.62% |
### Framework versions
- Datasets 2.9.0
- Tokenizers 0.12.1
- Transformers 4.24.0
- Pytorch 1.12.1+cu116 | 5,972 | [
[
-0.050689697265625,
-0.043548583984375,
0.0018177032470703125,
-0.0025501251220703125,
-0.02288818359375,
0.0036640167236328125,
-0.0274810791015625,
-0.055511474609375,
0.010040283203125,
0.0182952880859375,
-0.0272369384765625,
-0.04681396484375,
-0.0535888671... |
digiplay/HenmixArt_v1 | 2023-07-18T08:19:02.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/HenmixArt_v1 | 3 | 522 | diffusers | 2023-06-10T15:21:21 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/74158/henmixart
Sample images:



| 616 | [
[
-0.043212890625,
-0.0218658447265625,
0.0237579345703125,
0.033905029296875,
-0.03448486328125,
-0.00934600830078125,
0.033050537109375,
-0.017181396484375,
0.035736083984375,
0.0291595458984375,
-0.04937744140625,
-0.0313720703125,
-0.0232086181640625,
0.00... |
amu/tao | 2023-10-20T12:12:51.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | sentence-similarity | amu | null | null | amu/tao | 3 | 522 | sentence-transformers | 2023-10-18T05:51:02 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: tao
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 47.33752515292192
- type: cos_sim_spearman
value: 49.940772056837176
- type: euclidean_pearson
value: 48.12147487857213
- type: euclidean_spearman
value: 49.9407519488174
- type: manhattan_pearson
value: 48.07550286372865
- type: manhattan_spearman
value: 49.89535645392862
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 50.976865711125626
- type: cos_sim_spearman
value: 53.113084748593465
- type: euclidean_pearson
value: 55.1209592747571
- type: euclidean_spearman
value: 53.11308362230699
- type: manhattan_pearson
value: 55.09799309322416
- type: manhattan_spearman
value: 53.108059998577076
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.812
- type: f1
value: 39.02060856097395
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 62.84336868097746
- type: cos_sim_spearman
value: 65.540605433497
- type: euclidean_pearson
value: 64.08759819387913
- type: euclidean_spearman
value: 65.54060543369363
- type: manhattan_pearson
value: 64.09334283385029
- type: manhattan_spearman
value: 65.55376209169398
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 39.964020691388505
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 38.18628830038994
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 85.34294439514511
- type: mrr
value: 88.03849206349206
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 85.87127698007234
- type: mrr
value: 88.57980158730159
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.484
- type: map_at_10
value: 36.3
- type: map_at_100
value: 38.181
- type: map_at_1000
value: 38.305
- type: map_at_3
value: 32.39
- type: map_at_5
value: 34.504000000000005
- type: mrr_at_1
value: 37.608999999999995
- type: mrr_at_10
value: 45.348
- type: mrr_at_100
value: 46.375
- type: mrr_at_1000
value: 46.425
- type: mrr_at_3
value: 42.969
- type: mrr_at_5
value: 44.285999999999994
- type: ndcg_at_1
value: 37.608999999999995
- type: ndcg_at_10
value: 42.675999999999995
- type: ndcg_at_100
value: 50.12799999999999
- type: ndcg_at_1000
value: 52.321
- type: ndcg_at_3
value: 37.864
- type: ndcg_at_5
value: 39.701
- type: precision_at_1
value: 37.608999999999995
- type: precision_at_10
value: 9.527
- type: precision_at_100
value: 1.555
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.547
- type: precision_at_5
value: 15.504000000000001
- type: recall_at_1
value: 24.484
- type: recall_at_10
value: 52.43299999999999
- type: recall_at_100
value: 83.446
- type: recall_at_1000
value: 98.24199999999999
- type: recall_at_3
value: 37.653
- type: recall_at_5
value: 43.643
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 77.71497294046902
- type: cos_sim_ap
value: 86.84542027578229
- type: cos_sim_f1
value: 79.31987247608926
- type: cos_sim_precision
value: 72.70601987142022
- type: cos_sim_recall
value: 87.2574234276362
- type: dot_accuracy
value: 77.71497294046902
- type: dot_ap
value: 86.86514752961159
- type: dot_f1
value: 79.31987247608926
- type: dot_precision
value: 72.70601987142022
- type: dot_recall
value: 87.2574234276362
- type: euclidean_accuracy
value: 77.71497294046902
- type: euclidean_ap
value: 86.84541456571337
- type: euclidean_f1
value: 79.31987247608926
- type: euclidean_precision
value: 72.70601987142022
- type: euclidean_recall
value: 87.2574234276362
- type: manhattan_accuracy
value: 77.8111846061335
- type: manhattan_ap
value: 86.81148050422539
- type: manhattan_f1
value: 79.41176470588236
- type: manhattan_precision
value: 72.52173913043478
- type: manhattan_recall
value: 87.74842179097499
- type: max_accuracy
value: 77.8111846061335
- type: max_ap
value: 86.86514752961159
- type: max_f1
value: 79.41176470588236
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 68.862
- type: map_at_10
value: 77.079
- type: map_at_100
value: 77.428
- type: map_at_1000
value: 77.432
- type: map_at_3
value: 75.40400000000001
- type: map_at_5
value: 76.227
- type: mrr_at_1
value: 69.02000000000001
- type: mrr_at_10
value: 77.04299999999999
- type: mrr_at_100
value: 77.391
- type: mrr_at_1000
value: 77.395
- type: mrr_at_3
value: 75.44800000000001
- type: mrr_at_5
value: 76.23299999999999
- type: ndcg_at_1
value: 69.02000000000001
- type: ndcg_at_10
value: 80.789
- type: ndcg_at_100
value: 82.27499999999999
- type: ndcg_at_1000
value: 82.381
- type: ndcg_at_3
value: 77.40599999999999
- type: ndcg_at_5
value: 78.87100000000001
- type: precision_at_1
value: 69.02000000000001
- type: precision_at_10
value: 9.336
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.889000000000003
- type: precision_at_5
value: 17.492
- type: recall_at_1
value: 68.862
- type: recall_at_10
value: 92.308
- type: recall_at_100
value: 98.84100000000001
- type: recall_at_1000
value: 99.684
- type: recall_at_3
value: 83.087
- type: recall_at_5
value: 86.617
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.063999999999997
- type: map_at_10
value: 78.014
- type: map_at_100
value: 81.021
- type: map_at_1000
value: 81.059
- type: map_at_3
value: 53.616
- type: map_at_5
value: 68.00399999999999
- type: mrr_at_1
value: 87.8
- type: mrr_at_10
value: 91.824
- type: mrr_at_100
value: 91.915
- type: mrr_at_1000
value: 91.917
- type: mrr_at_3
value: 91.525
- type: mrr_at_5
value: 91.752
- type: ndcg_at_1
value: 87.8
- type: ndcg_at_10
value: 85.74199999999999
- type: ndcg_at_100
value: 88.82900000000001
- type: ndcg_at_1000
value: 89.208
- type: ndcg_at_3
value: 84.206
- type: ndcg_at_5
value: 83.421
- type: precision_at_1
value: 87.8
- type: precision_at_10
value: 41.325
- type: precision_at_100
value: 4.8
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 75.783
- type: precision_at_5
value: 64.25999999999999
- type: recall_at_1
value: 25.063999999999997
- type: recall_at_10
value: 87.324
- type: recall_at_100
value: 97.261
- type: recall_at_1000
value: 99.309
- type: recall_at_3
value: 56.281000000000006
- type: recall_at_5
value: 73.467
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 46.800000000000004
- type: map_at_10
value: 56.887
- type: map_at_100
value: 57.556
- type: map_at_1000
value: 57.582
- type: map_at_3
value: 54.15
- type: map_at_5
value: 55.825
- type: mrr_at_1
value: 46.800000000000004
- type: mrr_at_10
value: 56.887
- type: mrr_at_100
value: 57.556
- type: mrr_at_1000
value: 57.582
- type: mrr_at_3
value: 54.15
- type: mrr_at_5
value: 55.825
- type: ndcg_at_1
value: 46.800000000000004
- type: ndcg_at_10
value: 62.061
- type: ndcg_at_100
value: 65.042
- type: ndcg_at_1000
value: 65.658
- type: ndcg_at_3
value: 56.52700000000001
- type: ndcg_at_5
value: 59.518
- type: precision_at_1
value: 46.800000000000004
- type: precision_at_10
value: 7.84
- type: precision_at_100
value: 0.9169999999999999
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 21.133
- type: precision_at_5
value: 14.12
- type: recall_at_1
value: 46.800000000000004
- type: recall_at_10
value: 78.4
- type: recall_at_100
value: 91.7
- type: recall_at_1000
value: 96.39999999999999
- type: recall_at_3
value: 63.4
- type: recall_at_5
value: 70.6
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 48.010773374374764
- type: f1
value: 35.25314495210735
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 87.01688555347093
- type: ap
value: 56.39167630414159
- type: f1
value: 81.91756262306008
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 71.17867432738112
- type: cos_sim_spearman
value: 77.47954247528372
- type: euclidean_pearson
value: 76.32408876437825
- type: euclidean_spearman
value: 77.47954025694959
- type: manhattan_pearson
value: 76.33345801575938
- type: manhattan_spearman
value: 77.48901582125997
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 27.96333052746654
- type: mrr
value: 26.92023809523809
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 66.144
- type: map_at_10
value: 75.036
- type: map_at_100
value: 75.36
- type: map_at_1000
value: 75.371
- type: map_at_3
value: 73.258
- type: map_at_5
value: 74.369
- type: mrr_at_1
value: 68.381
- type: mrr_at_10
value: 75.633
- type: mrr_at_100
value: 75.91799999999999
- type: mrr_at_1000
value: 75.928
- type: mrr_at_3
value: 74.093
- type: mrr_at_5
value: 75.036
- type: ndcg_at_1
value: 68.381
- type: ndcg_at_10
value: 78.661
- type: ndcg_at_100
value: 80.15
- type: ndcg_at_1000
value: 80.456
- type: ndcg_at_3
value: 75.295
- type: ndcg_at_5
value: 77.14999999999999
- type: precision_at_1
value: 68.381
- type: precision_at_10
value: 9.481
- type: precision_at_100
value: 1.023
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.309
- type: precision_at_5
value: 17.974
- type: recall_at_1
value: 66.144
- type: recall_at_10
value: 89.24499999999999
- type: recall_at_100
value: 96.032
- type: recall_at_1000
value: 98.437
- type: recall_at_3
value: 80.327
- type: recall_at_5
value: 84.733
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.26832548755884
- type: f1
value: 65.97422207086723
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.13046402151984
- type: f1
value: 72.69199129694121
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 50.4
- type: map_at_10
value: 56.645
- type: map_at_100
value: 57.160999999999994
- type: map_at_1000
value: 57.218
- type: map_at_3
value: 55.383
- type: map_at_5
value: 56.08800000000001
- type: mrr_at_1
value: 50.6
- type: mrr_at_10
value: 56.745999999999995
- type: mrr_at_100
value: 57.262
- type: mrr_at_1000
value: 57.318999999999996
- type: mrr_at_3
value: 55.483000000000004
- type: mrr_at_5
value: 56.188
- type: ndcg_at_1
value: 50.4
- type: ndcg_at_10
value: 59.534
- type: ndcg_at_100
value: 62.400999999999996
- type: ndcg_at_1000
value: 64.01299999999999
- type: ndcg_at_3
value: 56.887
- type: ndcg_at_5
value: 58.160000000000004
- type: precision_at_1
value: 50.4
- type: precision_at_10
value: 6.859999999999999
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 20.4
- type: precision_at_5
value: 12.86
- type: recall_at_1
value: 50.4
- type: recall_at_10
value: 68.60000000000001
- type: recall_at_100
value: 82.8
- type: recall_at_1000
value: 95.7
- type: recall_at_3
value: 61.199999999999996
- type: recall_at_5
value: 64.3
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 73.39666666666666
- type: f1
value: 72.86349039489504
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 73.36220898754738
- type: cos_sim_ap
value: 78.50300066088354
- type: cos_sim_f1
value: 75.39370078740157
- type: cos_sim_precision
value: 70.59907834101382
- type: cos_sim_recall
value: 80.8870116156283
- type: dot_accuracy
value: 73.36220898754738
- type: dot_ap
value: 78.50300066088354
- type: dot_f1
value: 75.39370078740157
- type: dot_precision
value: 70.59907834101382
- type: dot_recall
value: 80.8870116156283
- type: euclidean_accuracy
value: 73.36220898754738
- type: euclidean_ap
value: 78.50300066088354
- type: euclidean_f1
value: 75.39370078740157
- type: euclidean_precision
value: 70.59907834101382
- type: euclidean_recall
value: 80.8870116156283
- type: manhattan_accuracy
value: 73.09149972929075
- type: manhattan_ap
value: 78.41160715817406
- type: manhattan_f1
value: 75.3623188405797
- type: manhattan_precision
value: 69.45681211041853
- type: manhattan_recall
value: 82.36536430834214
- type: max_accuracy
value: 73.36220898754738
- type: max_ap
value: 78.50300066088354
- type: max_f1
value: 75.39370078740157
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 91.82000000000001
- type: ap
value: 89.3671278896903
- type: f1
value: 91.8021970144045
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 30.07022294131062
- type: cos_sim_spearman
value: 36.21542804954441
- type: euclidean_pearson
value: 36.37841945307606
- type: euclidean_spearman
value: 36.215513214835546
- type: manhattan_pearson
value: 36.31755715017088
- type: manhattan_spearman
value: 36.16848256918425
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.779755871073505
- type: cos_sim_spearman
value: 38.736220679196606
- type: euclidean_pearson
value: 37.13356686891227
- type: euclidean_spearman
value: 38.73619198602118
- type: manhattan_pearson
value: 37.175466658530816
- type: manhattan_spearman
value: 38.74523158724344
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.9737863254904
- type: cos_sim_spearman
value: 68.88293545840186
- type: euclidean_pearson
value: 67.23730973929247
- type: euclidean_spearman
value: 68.88293545840186
- type: manhattan_pearson
value: 67.30647960940956
- type: manhattan_spearman
value: 68.90553460682702
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 78.99371432933002
- type: cos_sim_spearman
value: 79.36496709214312
- type: euclidean_pearson
value: 78.77721120706431
- type: euclidean_spearman
value: 79.36500761622595
- type: manhattan_pearson
value: 78.82503201285202
- type: manhattan_spearman
value: 79.43915548337401
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.38418982516941
- type: mrr
value: 76.09996131153883
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 77.209
- type: map_at_100
value: 80.838
- type: map_at_1000
value: 80.903
- type: map_at_3
value: 54.196
- type: map_at_5
value: 66.664
- type: mrr_at_1
value: 90.049
- type: mrr_at_10
value: 92.482
- type: mrr_at_100
value: 92.568
- type: mrr_at_1000
value: 92.572
- type: mrr_at_3
value: 92.072
- type: mrr_at_5
value: 92.33
- type: ndcg_at_1
value: 90.049
- type: ndcg_at_10
value: 84.69200000000001
- type: ndcg_at_100
value: 88.25699999999999
- type: ndcg_at_1000
value: 88.896
- type: ndcg_at_3
value: 86.09700000000001
- type: ndcg_at_5
value: 84.68599999999999
- type: precision_at_1
value: 90.049
- type: precision_at_10
value: 42.142
- type: precision_at_100
value: 5.017
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.358
- type: precision_at_5
value: 63.173
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 83.59400000000001
- type: recall_at_100
value: 95.21
- type: recall_at_1000
value: 98.503
- type: recall_at_3
value: 55.849000000000004
- type: recall_at_5
value: 69.986
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 51.925999999999995
- type: f1
value: 50.16867723626971
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 60.738901671970005
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 57.08563183138733
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 52.0
- type: map_at_10
value: 62.956
- type: map_at_100
value: 63.491
- type: map_at_1000
value: 63.50599999999999
- type: map_at_3
value: 60.733000000000004
- type: map_at_5
value: 62.217999999999996
- type: mrr_at_1
value: 52.0
- type: mrr_at_10
value: 62.956
- type: mrr_at_100
value: 63.491
- type: mrr_at_1000
value: 63.50599999999999
- type: mrr_at_3
value: 60.733000000000004
- type: mrr_at_5
value: 62.217999999999996
- type: ndcg_at_1
value: 52.0
- type: ndcg_at_10
value: 67.956
- type: ndcg_at_100
value: 70.536
- type: ndcg_at_1000
value: 70.908
- type: ndcg_at_3
value: 63.456999999999994
- type: ndcg_at_5
value: 66.155
- type: precision_at_1
value: 52.0
- type: precision_at_10
value: 8.35
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 23.767
- type: precision_at_5
value: 15.58
- type: recall_at_1
value: 52.0
- type: recall_at_10
value: 83.5
- type: recall_at_100
value: 95.5
- type: recall_at_1000
value: 98.4
- type: recall_at_3
value: 71.3
- type: recall_at_5
value: 77.9
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 87.10000000000001
- type: ap
value: 70.81766065881429
- type: f1
value: 85.5323306120456
---
A try for an embedding model | 25,182 | [
[
-0.002620697021484375,
-0.07745361328125,
0.040313720703125,
0.0182037353515625,
-0.018463134765625,
-0.03570556640625,
0.00754547119140625,
-0.0011014938354492188,
0.0303192138671875,
0.0274200439453125,
-0.039306640625,
-0.01378631591796875,
-0.023300170898437... |
pucpr/clinicalnerpt-disorder | 2021-10-13T09:32:51.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | pucpr | null | null | pucpr/clinicalnerpt-disorder | 5 | 521 | transformers | 2022-03-02T23:29:05 | ---
language: "pt"
widget:
- text: "PACIENTE DE 69 ANOS COM ICC DE ETIOLOGIA ISQUÊMICA "
- text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Disorder
The Disorder NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
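The card does not include a usage snippet. Below is a minimal sketch (not from the original card) that loads the model with the standard transformers token-classification pipeline, reusing the first widget sentence above as input; the exact entity labels come from the model's own config:
```python
from transformers import pipeline

# Aggregation merges the IOB2 sub-token tags into whole entity spans.
ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-disorder",
    aggregation_strategy="simple",
)

text = "PACIENTE DE 69 ANOS COM ICC DE ETIOLOGIA ISQUÊMICA"
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```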
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
| 3,334 | [
[
-0.0133209228515625,
-0.04986572265625,
0.036346435546875,
0.0231781005859375,
-0.0206298828125,
-0.00315093994140625,
-0.020843505859375,
-0.055267333984375,
0.02789306640625,
0.03485107421875,
-0.0079498291015625,
-0.052825927734375,
-0.06024169921875,
0.0... |
sonoisa/t5-base-japanese-v1.1 | 2022-08-27T09:21:01.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"seq2seq",
"ja",
"dataset:wikipedia",
"dataset:oscar",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | sonoisa | null | null | sonoisa/t5-base-japanese-v1.1 | 7 | 521 | transformers | 2022-08-12T15:41:22 | ---
language:
- ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
datasets:
- wikipedia
- oscar
- cc100
---
# Japanese T5 Pretrained Model
This is a T5 (Text-to-Text Transfer Transformer) model pretrained on Japanese corpus.
This model uses the T5 (Text-to-Text Transfer Transformer) [v1.1 architecture](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md) and was pretrained on the following Japanese corpora (about 100GB):
* The Japanese dump data of [Wikipedia](https://ja.wikipedia.org) (as of June 27, 2022)
* The Japanese corpus of [OSCAR](https://oscar-corpus.com)
* The Japanese corpus of [CC-100](http://data.statmt.org/cc-100/)
This model has only been pretrained; it needs to be fine-tuned before it can be used for a specific task.
Like other language models trained on large corpora, this model carries the latent risk of producing skewed (unethical, harmful, or biased) outputs that stem from biases in the content of its training data.
Please keep in mind that this problem can occur, and take care to use the model only for purposes where no harm will result.
The SentencePiece tokenizer was trained on the above Wikipedia and CC-100 data mixed at a ratio of roughly 10:1. It was trained with byte-fallback enabled, so in practice no unknown tokens occur.
# Sample Code for Transfer Learning
https://github.com/sonoisa/t5-japanese
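As a minimal loading sketch (not from the original card), the checkpoint should work with the standard T5 classes:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumes the standard T5 classes apply to this checkpoint.
tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-base-japanese-v1.1")
model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-base-japanese-v1.1")

# The checkpoint is pretrained only; fine-tune it on your task before use
# (see the repository linked above for complete fine-tuning examples).
```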
# Benchmark
In preparation
# Disclaimer
While the author of this model has taken great care over its content and functionality, the author makes no guarantee that the model's outputs are accurate or safe, and assumes no responsibility for them. Even if a user suffers any inconvenience or damage through the use of this model, the authors of the model and datasets and the organizations they belong to assume no responsibility. Users are obligated to make it clear that the authors of the model and datasets and their organizations bear no responsibility.
# License
[CC-BY SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja)
Please also take care to comply with [Common Crawl's terms of use](http://commoncrawl.org/terms-of-use/).
| 1,383 | [
[
-0.032806396484375,
-0.048583984375,
0.0401611328125,
0.022247314453125,
-0.044830322265625,
0.004058837890625,
0.0019283294677734375,
-0.03778076171875,
0.0246429443359375,
0.0209197998046875,
-0.049957275390625,
-0.047515869140625,
-0.05316162109375,
0.009... |
allenai/entailer-large | 2023-01-24T17:19:13.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | allenai | null | null | allenai/entailer-large | 2 | 521 | transformers | 2022-10-19T23:08:56 | ---
language: en
widget:
- text: $proof$ ; $hypothesis$ = a magnet will not attract a penny
license: apache-2.0
---
# entailer-large
## Model description
Entailer is a text-to-text model trained to create entailment-style explanations for a hypothesis
(following the format of [EntailmentBank](https://allenai.org/data/entailmentbank)), as well as to verify both the reasoning and the factuality of the premises.
Entailer was built on top of [T5](https://github.com/google-research/text-to-text-transfer-transformer) and comes in
two sizes: [entailer-11b](https://huggingface.co/allenai/entailer-11b) and
[entailer-large](https://huggingface.co/allenai/entailer-large).
See https://github.com/allenai/entailment_bank for more details.
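A minimal inference sketch (not from the original card), using the standard seq2seq classes; the input string reuses the widget example from this card's metadata:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/entailer-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/entailer-large")

# Input format taken from the widget example on this card.
prompt = "$proof$ ; $hypothesis$ = a magnet will not attract a penny"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```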
| 742 | [
[
-0.0318603515625,
-0.06781005859375,
0.050994873046875,
0.0169525146484375,
-0.006229400634765625,
-0.01044464111328125,
-0.0030078887939453125,
-0.01561737060546875,
0.035003662109375,
0.042236328125,
-0.041412353515625,
-0.0195465087890625,
-0.051666259765625,... |
google/mobilenet_v2_1.4_224 | 2023-05-16T16:38:32.000Z | [
"transformers",
"pytorch",
"mobilenet_v2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1801.04381",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | image-classification | google | null | null | google/mobilenet_v2_1.4_224 | 1 | 521 | transformers | 2022-11-10T16:03:54 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V2
MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.4\_224**, where **1.4** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.4_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.4_224")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{mobilenetv22018,
title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
booktitle={CVPR},
year={2018}
}
```
| 3,315 | [
[
-0.034027099609375,
-0.0161590576171875,
-0.0189056396484375,
-0.00461578369140625,
-0.023193359375,
-0.026824951171875,
0.01953125,
-0.05572509765625,
0.0257568359375,
0.0308990478515625,
-0.0233612060546875,
-0.01113128662109375,
-0.04437255859375,
-0.0211... |
reazon-research/reazonspeech-espnet-next | 2023-03-29T07:35:13.000Z | [
"espnet",
"automatic-speech-recognition",
"ja",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | reazon-research | null | null | reazon-research/reazonspeech-espnet-next | 15 | 521 | espnet | 2023-03-29T07:20:03 | ---
license: apache-2.0
language:
- ja
library_name: espnet
tags:
- automatic-speech-recognition
---
# reazonspeech-espnet-next
ReazonSpeech is a project to maintain freely-available Japanese audio
datasets and ML models.
`reazonspeech-espnet-next` is a "bleeding-edge" repository that contains the
latest ASR models trained by the ReazonSpeech team.
We maintain this repository because we want to make our latest research
results readily available, and we try to incorporate feedback from the community
as quickly as possible.
If you are curious about the project, or have built something based on
ReazonSpeech model, please let us know!
Project site: https://research.reazon.jp/projects/ReazonSpeech/index.html
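A minimal inference sketch (not from the original card), assuming the checkpoint loads through ESPnet's standard ASR interface and that `speech.wav` is a hypothetical 16 kHz mono recording:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Assumption: the Hugging Face repo id works as an ESPnet model tag here.
speech2text = Speech2Text.from_pretrained("reazon-research/reazonspeech-espnet-next")

speech, rate = sf.read("speech.wav")
text, *_ = speech2text(speech)[0]  # best hypothesis: (text, tokens, ids, hyp)
print(text)
```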
| 706 | [
[
-0.032012939453125,
-0.0235748291015625,
-0.0044708251953125,
0.021026611328125,
-0.01378631591796875,
-0.0064697265625,
-0.013885498046875,
-0.03076171875,
0.029632568359375,
0.04840087890625,
-0.07598876953125,
-0.0523681640625,
-0.048797607421875,
0.00627... |
Hemlok/MandarinMix | 2023-04-20T22:45:14.000Z | [
"diffusers",
"stable-diffusion",
"text-to-image",
"art",
"ja",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Hemlok | null | null | Hemlok/MandarinMix | 25 | 521 | diffusers | 2023-04-20T20:48:08 | ---
license: creativeml-openrail-m
language:
- ja
tags:
- stable-diffusion
- text-to-image
- art
library_name: diffusers
---
# ◆MandarinMix

- A merge model based on "Defacta3th".
- A model that is strong at backgrounds and effects.
----
# ◆Discord
[Join Discord Server](https://discord.gg/eN6aSWRddT)
- Hemlok's merge community. Recipes and behind-the-scenes talk can be found here.
----
# ◆Model Overview
- [English Readme]()
- Sampler: DDIM or DPM++ SDE Karras recommended.
- Steps: 40~
- Clipskip: 2
- CFG Scale: 5~8.
- Denoise strength: 0.4-0.6
- "EasyNegative" is recommended.
- Quality tags (masterpiece, best quality, etc.) are not required; add them to taste.
----
# ◆Samples

----
## ◆MandarinMix

- Prompt:
```
(masterpiece, best quality:1.1), 1girl, solo, teen, in room, (morning),
```
---
## ◆MandarinMix V2

- Prompt:
```
(masterpiece, best quality:1.1), 1girl, solo, teen, in room, (morning),
```
---
## ◆MandarinMix-EX

- Prompt:
```
(masterpiece, best quality:1.1), 1girl, solo, teen, in room, (morning),
```
- A model fine-tuned with a self-made LoRA.
---
# ◆How to Use the Model
- Download the model and use it in a WebUI or similar tool.
- The model files are inside the Model folder.
- A VAE is not required, but you may use one if you like.
----
## 🧨Diffusers
- When using Diffusers, use the following code.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Hemlok/MandarinMix"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "One girl standing by the window"
image = pipe(prompt).images[0]
image.save("test.png")
```
----
# Disclaimer
- The creation of SFW and NSFW images is at the discretion of each individual creator. The model author assumes no responsibility.
- This model was not made for publishing NSFW content in public places and the like.
----
# License
- This model is open access and available to everyone, with further rights and usage specified by the CreativeML OpenRAIL-M license.
- The CreativeML OpenRAIL license specifies the following:
1. You may not use this model to deliberately create or share illegal or harmful outputs or content.
2. The author claims no rights over the outputs you generate. You are free to use them, but please follow the provisions set out in the license. Use at your own risk.
3. You may redistribute the weights and use the model commercially or as a service. If you do, please note that you must include the same usage restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all of your users (please read the license fully and carefully).
- (Full text of the license: [https://huggingface.co/spaces/CompVis/stable-diffusion-license](https://huggingface.co/spaces/CompVis/stable-diffusion-license)) | 2,178 | [
[
-0.040618896484375,
-0.06097412109375,
0.0254669189453125,
0.03692626953125,
-0.038970947265625,
-0.0101318359375,
0.0027027130126953125,
-0.02197265625,
0.024566650390625,
0.025177001953125,
-0.0604248046875,
-0.0576171875,
-0.03839111328125,
0.007541656494... |
digiplay/hellopure_v2.23 | 2023-10-24T06:14:37.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/hellopure_v2.23 | 0 | 521 | diffusers | 2023-07-04T03:11:30 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/88202?modelVersionId=104921
Author's info:
https://civitai.com/user/aji1/models
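The card gives no usage snippet. A minimal sketch (not from the original card) of how a diffusers checkpoint of this kind typically loads; the prompt is purely a placeholder:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/hellopure_v2.23", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, portrait, soft lighting").images[0]  # placeholder prompt
image.save("sample.png")
```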
| 243 | [
[
-0.0181121826171875,
-0.00458526611328125,
0.048248291015625,
0.031646728515625,
-0.0098724365234375,
-0.02447509765625,
0.043212890625,
-0.023193359375,
0.025360107421875,
0.046051025390625,
-0.0540771484375,
-0.0037746429443359375,
-0.00202178955078125,
-0... |
bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7 | 2023-08-03T05:30:53.000Z | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | text-to-image | bharadwajkg | null | null | bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7 | 0 | 521 | diffusers | 2023-08-02T16:59:15 |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the bharadwajkg/planogram-sd-data7 dataset. You can find some example images below.
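A minimal inference sketch (not from the original card), assuming a recent diffusers release that provides `load_lora_weights`; the prompt is a guess based on the dataset name:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7")

image = pipe("a planogram of a retail shelf", num_inference_steps=30).images[0]  # hypothetical prompt
image.save("planogram.png")
```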




| 577 | [
[
-0.0218963623046875,
-0.056640625,
0.00922393798828125,
0.0274505615234375,
-0.042388916015625,
-0.028228759765625,
0.0079498291015625,
-0.01198577880859375,
0.041351318359375,
0.057373046875,
-0.03802490234375,
-0.03778076171875,
-0.04656982421875,
-0.01831... |
timm/volo_d5_448.sail_in1k | 2023-04-13T06:11:53.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.13112",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/volo_d5_448.sail_in1k | 0 | 520 | timm | 2023-04-13T06:07:47 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for volo_d5_448.sail_in1k
A VOLO (Vision Outlooker) image classification model. Trained on ImageNet-1k with token labelling by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 295.9
- GMACs: 315.1
- Activations (M): 737.9
- Image size: 448 x 448
- **Papers:**
- VOLO: Vision Outlooker for Visual Recognition: https://arxiv.org/abs/2106.13112
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/sail-sg/volo
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('volo_d5_448.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'volo_d5_448.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{yuan2022volo,
title={Volo: Vision outlooker for visual recognition},
author={Yuan, Li and Hou, Qibin and Jiang, Zihang and Feng, Jiashi and Yan, Shuicheng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2022},
publisher={IEEE}
}
```
| 2,601 | [
[
-0.0304107666015625,
-0.01409912109375,
0.00725555419921875,
0.01953125,
-0.0440673828125,
-0.029876708984375,
0.0006194114685058594,
-0.027740478515625,
0.0216064453125,
0.038482666015625,
-0.051513671875,
-0.048980712890625,
-0.0526123046875,
-0.0018196105... |
Daniil-plotnikov/russian-vision-v5-beta | 2023-07-13T16:37:04.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"ru",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | Daniil-plotnikov | null | null | Daniil-plotnikov/russian-vision-v5-beta | 0 | 520 | diffusers | 2023-07-12T22:47:22 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- ru
- en
---
A model based on Stable Diffusion that understands Russian! At a basic level, but it understands! | 202 | [
[
-0.01190948486328125,
-0.047698974609375,
0.026641845703125,
-0.0089111328125,
-0.040802001953125,
0.0169525146484375,
0.0282745361328125,
0.01151275634765625,
0.03985595703125,
0.03228759765625,
-0.04498291015625,
-0.029022216796875,
-0.0196685791015625,
0.... |
Dinesh-2004/my-pet-dog | 2023-07-15T11:48:14.000Z | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | text-to-image | Dinesh-2004 | null | null | Dinesh-2004/my-pet-dog | 0 | 520 | diffusers | 2023-07-15T11:44:32 | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Dinesh-2004 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU107
Sample pictures of this concept:
| 292 | [
[
-0.060150146484375,
-0.0153961181640625,
0.029327392578125,
0.00731658935546875,
-0.011962890625,
0.02001953125,
0.0347900390625,
-0.0276336669921875,
0.0300140380859375,
0.023956298828125,
-0.038909912109375,
-0.012786865234375,
-0.00037026405334472656,
0.0... |
TIGER-Lab/MAmmoTH-7B | 2023-10-23T02:59:20.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:TIGER-Lab/MathInstruct",
"arxiv:2309.05653",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | TIGER-Lab | null | null | TIGER-Lab/MAmmoTH-7B | 5 | 520 | transformers | 2023-09-10T20:51:23 | ---
license: mit
datasets:
- TIGER-Lab/MathInstruct
language:
- en
---
# 🦣 MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH/](https://tiger-ai-lab.github.io/MAmmoTH/)
Paper: [https://arxiv.org/pdf/2309.05653.pdf](https://arxiv.org/pdf/2309.05653.pdf)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
## Introduction
We introduce 🦣 MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
| | **Base Model: Llama-2** | **Base Model: Code Llama** |
|-----|---------------------------------------------------------------|--------------------------------------------------------------------------|
| 7B | 🦣 [MAmmoTH-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 🦣 [MAmmoTH-Coder-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) |
| 13B | 🦣 [MAmmoTH-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 🦣 [MAmmoTH-Coder-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B)|
| 34B | - | 🦣 [MAmmoTH-Coder-34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B)|
| 70B | 🦣 [MAmmoTH-70B](https://huggingface.co/TIGER-Lab/MAmmoTH-70B) | - |
## Training Data
The models are trained on the 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), which is compiled from 13 different math rationale datasets. Check out the dataset card for more details.
## Training Procedure
The models are fine-tuned with the MathInstruct dataset using the original Llama-2 and Code Llama models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **Decoding** | **GSM** | **MATH** | **AQuA** | **NumG** | **SVA** | **Mat** | **Sim** | **SAT** | **MMLU** | **AVG** |
|-----------------------|--------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **MAmmoTH-7B** | CoT | 50.5 | 10.4 | 43.7 | 44.0 | 47.3 | 9.2 | 18.9 | 32.7 | 39.9 | 33.0 |
| | PoT | 51.6 | 28.7 | 43.3 | 52.3 | 65.1 | 41.9 | 48.2 | 39.1 | 44.6 | 46.1 |
| | **Hybrid** | **53.6** | **31.5** | **44.5** | **61.2** | **67.7** | **46.3** | **41.2** | **42.7** | **42.6** | **47.9** |
| **MAmmoTH-Coder-7B** | CoT | 22.4 | 7.9 | 36.2 | 36.0 | 37.0 | 8.2 | 7.2 | 32.7 | 34.6 | 24.7 |
| | PoT | 58.8 | 32.1 | 47.2 | 57.1 | 71.1 | 53.9 | 44.6 | 40.0 | 47.8 | 50.3 |
| | **Hybrid** | **59.4** | **33.4** | **47.2** | **66.4** | **71.4** | **55.4** | **45.9** | **40.5** | **48.3** | **52.0** |
| **MAmmoTH-13B** | CoT | 56.3 | 12.9 | 45.3 | 45.6 | 53.8 | 11.7 | 22.4 | 43.6 | 42.3 | 37.1 |
| | PoT | 61.3 | 32.6 | 48.8 | 59.6 | 72.2 | 48.5 | 40.3 | 46.8 | 45.4 | 50.6 |
| | **Hybrid** | **62.0** | **34.2** | **51.6** | **68.7** | **72.4** | **49.2** | **43.2** | **46.8** | **47.6** | **52.9** |
| **MAmmoTH-Coder-13B** | CoT | 32.1 | 10.2 | 40.6 | 36.2 | 43.0 | 9.6 | 10.1 | 40.9 | 36.6 | 28.8 |
| | PoT | 64.3 | 35.2 | 46.8 | 54.2 | 73.2 | 60.0 | 44.2 | 48.2 | 48.2 | 52.7 |
| | **Hybrid** | **64.7** | **36.3** | **46.9** | **66.8** | **73.7** | **61.5** | **47.1** | **48.6** | **48.3** | **54.9** |
| **MAmmoTH-Coder-34B** | CoT | 34.3 | 11.6 | 39.0 | 36.2 | 44.6 | 10.8 | 10.9 | 46.4 | 42.9 | 30.7 |
| | PoT | 72.3 | 42.8 | 53.8 | 59.6 | 84.0 | 64.7 | 50.6 | 58.6 | 52.7 | 59.9 |
| | **Hybrid** | **72.7** | **43.6** | **54.7** | **71.6** | **84.3** | **65.4** | **51.8** | **60.9** | **53.8** | **62.1** |
| **MAmmoTH-70B** | CoT | 72.4 | 21.1 | 57.9 | 58.9 | 71.6 | 20.0 | 31.9 | 57.3 | 52.1 | 49.2 |
| | PoT | 76.7 | 40.1 | 60.2 | 64.3 | 81.7 | 55.3 | 45.3 | 64.1 | 53.5 | 60.1 |
| | **Hybrid** | **76.9** | **41.8** | **65.0** | **74.4** | **82.4** | **55.6** | **51.4** | **66.4** | **56.7** | **63.4** |
## Usage
You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
## Prompt Format
If you want to do CoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
If you want to do PoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction} Let's write a program.
### Response:
```
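Putting the pipeline usage and the CoT prompt format together, a minimal sketch (not from the original card); the instruction is just an example problem:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="TIGER-Lab/MAmmoTH-7B", device_map="auto")

instruction = "What is the sum of the first 20 positive even integers?"  # example problem
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
print(pipe(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"])
```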
## Intended Uses
These models are trained for research purposes. They are designed to solve general math problems. They can be used in educational software, tutoring systems, or any application where a solution to a math problem is needed. The models can generate both a chain of thought (CoT) rationale and a program of thought (PoT) rationale, providing a comprehensive solution to a given math problem.
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
``` | 7,685 | [
[
-0.0487060546875,
-0.04913330078125,
0.0129241943359375,
0.007595062255859375,
-0.0188751220703125,
0.0151214599609375,
-0.0015382766723632812,
-0.0181732177734375,
0.0357666015625,
0.0321044921875,
-0.046173095703125,
-0.0303802490234375,
-0.0491943359375,
... |
TheBloke/Euryale-1.3-L2-70B-GPTQ | 2023-10-18T13:25:36.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Euryale-1.3-L2-70B-GPTQ | 3 | 520 | transformers | 2023-10-18T09:22:19 | ---
base_model: Sao10K/Euryale-1.3-L2-70B
inference: false
language:
- en
license: llama2
model_creator: Saofiq
model_name: Euryale 1.3 L2 70B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Euryale 1.3 L2 70B - GPTQ
- Model creator: [Saofiq](https://huggingface.co/Sao10K)
- Original model: [Euryale 1.3 L2 70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Saofiq's Euryale 1.3 L2 70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF)
* [Saofiq's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 31.84 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Euryale-1.3-L2-70B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Euryale-1.3-L2-70B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Euryale-1.3-L2-70B-GPTQ`:
```shell
mkdir Euryale-1.3-L2-70B-GPTQ
huggingface-cli download TheBloke/Euryale-1.3-L2-70B-GPTQ --local-dir Euryale-1.3-L2-70B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Euryale-1.3-L2-70B-GPTQ
huggingface-cli download TheBloke/Euryale-1.3-L2-70B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Euryale-1.3-L2-70B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Euryale-1.3-L2-70B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Euryale-1.3-L2-70B-GPTQ --local-dir Euryale-1.3-L2-70B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Euryale-1.3-L2-70B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Euryale-1.3-L2-70B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Euryale-1.3-L2-70B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Euryale-1.3-L2-70B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Euryale-1.3-L2-70B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Saofiq's Euryale 1.3 L2 70B

17th Attempt. Past 10 Failed, cost me >$200 lol.
Idea is an updated version of Euryale with ReMantik instead of the ties-merge between the original 3 models.
This is then mixed with a saucy model with a Mythomax-esque Ratio, and a certain experimental (self) LoRA applied to it.
Test Results: Works Well.
<br>NSFL and NSFW fine in roleplay context.
<br>slight censor with 0 context, zero issues in actual RP / ERP.
<br>Good Prose, Not Dumbed Down due to RP merges from testing.
<br> I have not encountered any repetition issues some had with the original Euryale. tell me if you do, though.
Prompt and System Format:
most works well. I recommend Alpaca.
ST Settings used for Test:
Lightning 1.1 System Prompt + Shortwave(1.2 Temperature)
Support me [here](https://ko-fi.com/sao10k) :)
| 19,765 | [
[
-0.044921875,
-0.0548095703125,
0.006504058837890625,
0.0143280029296875,
-0.01837158203125,
-0.01013946533203125,
0.007045745849609375,
-0.03662109375,
0.024627685546875,
0.03082275390625,
-0.04644775390625,
-0.03662109375,
-0.0259246826171875,
0.0014314651... |
symanto/xlm-roberta-base-snli-mnli-anli-xnli | 2023-02-20T09:54:38.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"ar",
"bg",
"de",
"el",
"en",
"es",
"fr",
"ru",
"th",
"tr",
"ur",
"vn",
"zh",
"multilingual",
"dataset:SNLI",
"dataset:MNLI",
"dataset:ANLI",
"dataset:XNLI",
"endpoints_compatib... | text-classification | symanto | null | null | symanto/xlm-roberta-base-snli-mnli-anli-xnli | 4 | 519 | transformers | 2022-03-02T23:29:05 | ---
language:
- ar
- bg
- de
- el
- en
- es
- fr
- ru
- th
- tr
- ur
- vn
- zh
- multilingual
tags:
- zero-shot-classification
datasets:
- SNLI
- MNLI
- ANLI
- XNLI
---
A cross-attention NLI model trained for zero-shot and few-shot text classification.
The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), trained with the code from [here](https://github.com/facebookresearch/anli)
on [SNLI](https://nlp.stanford.edu/projects/snli/), [MNLI](https://cims.nyu.edu/~sbowman/multinli/), [ANLI](https://github.com/facebookresearch/anli) and [XNLI](https://github.com/facebookresearch/XNLI).
Usage:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import numpy as np
model = AutoModelForSequenceClassification.from_pretrained("symanto/xlm-roberta-base-snli-mnli-anli-xnli")
tokenizer = AutoTokenizer.from_pretrained("symanto/xlm-roberta-base-snli-mnli-anli-xnli")
input_pairs = [
("I like this pizza.", "The sentence is positive."),
("I like this pizza.", "The sentence is negative."),
("I mag diese Pizza.", "Der Satz ist positiv."),
("I mag diese Pizza.", "Der Satz ist negativ."),
("Me gusta esta pizza.", "Esta frase es positivo."),
("Me gusta esta pizza.", "Esta frase es negativo."),
]
inputs = tokenizer(input_pairs, truncation="only_first", return_tensors="pt", padding=True)
logits = model(**inputs).logits
probs = torch.softmax(logits, dim=1)
probs = probs[..., [0]].tolist()
print("probs", probs)
np.testing.assert_almost_equal(probs, [[0.83], [0.04], [1.00], [0.00], [1.00], [0.00]], decimal=2)
```
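Building on the snippet above, the pairwise entailment scores can be wrapped into a small zero-shot classifier. This is a minimal sketch (the `zero_shot_classify` helper and its `hypothesis_template` are illustrative, not part of the model; `model`, `tokenizer` and `torch` come from the block above):
```python
def zero_shot_classify(text, labels, hypothesis_template="The sentence is {}."):
    """Score each candidate label by the entailment probability of (text, hypothesis)."""
    pairs = [(text, hypothesis_template.format(label)) for label in labels]
    inputs = tokenizer(pairs, truncation="only_first", return_tensors="pt", padding=True)
    logits = model(**inputs).logits
    entail_probs = torch.softmax(logits, dim=1)[:, 0]  # index 0 = entailment, as in the assert above
    return {label: prob.item() for label, prob in zip(labels, entail_probs)}

print(zero_shot_classify("I like this pizza.", ["positive", "negative"]))
# e.g. {'positive': 0.83, 'negative': 0.04}; scores are per-pair entailment, not normalized over labels
```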
| 1,678 | [
[
-0.002742767333984375,
-0.045135498046875,
0.0260467529296875,
0.0100555419921875,
-0.0055694580078125,
-0.020111083984375,
-0.00628662109375,
-0.0140533447265625,
0.01038360595703125,
0.0218505859375,
-0.03668212890625,
-0.0496826171875,
-0.044189453125,
0.... |
timm/seresnet152d.ra2_in1k | 2023-04-05T19:32:44.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1512.03385",
"arxiv:1709.01507",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/seresnet152d.ra2_in1k | 0 | 519 | timm | 2023-04-05T19:31:52 | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
---
# Model card for seresnet152d.ra2_in1k
An SE-ResNet-D image classification model with Squeeze-and-Excitation channel attention.
This model features:
* ReLU activations
* 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
* Squeeze-and-Excitation channel attention
Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 66.8
- GMACs: 15.4
- Activations (M): 30.6
- Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Squeeze-and-Excitation Networks: https://arxiv.org/abs/1709.01507
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('seresnet152d.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
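To map the top-5 class indices above to human-readable names, one option is to load an ImageNet-1k label list. A minimal sketch continuing from the snippet above (the label-file URL is one commonly used source, not something this model depends on):
```python
from urllib.request import urlopen

labels = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode().splitlines()

# top5_probabilities / top5_class_indices come from the classification snippet above
for prob, idx in zip(top5_probabilities[0].tolist(), top5_class_indices[0].tolist()):
    print(f'{labels[idx]}: {prob:.2f}%')
```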
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'seresnet152d.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'seresnet152d.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
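Related checkpoints can also be enumerated programmatically; `timm.list_models` accepts a wildcard filter:
```python
import timm

# list all pretrained SE-ResNet variants shipped with timm
print(timm.list_models('seresnet*', pretrained=True))
```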
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@inproceedings{hu2018senet,
title={Squeeze-and-Excitation Networks},
author={Jie Hu and Li Shen and Gang Sun},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
year={2018}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
| 39,482 | [
[
-0.062347412109375,
-0.0196075439453125,
0.0028896331787109375,
0.02740478515625,
-0.0308837890625,
-0.01145172119140625,
-0.0106353759765625,
-0.0330810546875,
0.0804443359375,
0.0218048095703125,
-0.0465087890625,
-0.03759765625,
-0.050018310546875,
-0.002... |
TheBloke/NexusRaven-13B-GPTQ | 2023-09-29T16:18:51.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2308.12950",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/NexusRaven-13B-GPTQ | 5 | 519 | transformers | 2023-09-28T23:00:55 | ---
base_model: Nexusflow/NexusRaven-13B
inference: false
license: llama2
model-index:
- name: NexusRaven-13B
results: []
model_creator: Nexusflow
model_name: Nexusraven 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nexusraven 13B - GPTQ
- Model creator: [Nexusflow](https://huggingface.co/Nexusflow)
- Original model: [Nexusraven 13B](https://huggingface.co/Nexusflow/NexusRaven-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Nexusflow's Nexusraven 13B](https://huggingface.co/Nexusflow/NexusRaven-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/NexusRaven-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/NexusRaven-13B-GGUF)
* [Nexusflow's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Nexusflow/NexusRaven-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 14.55 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
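To confirm which of these parameters a given branch actually uses, you can read its `quantize_config.json` directly. A minimal sketch with `huggingface_hub` (the branch shown is just one example from the table):
```python
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="TheBloke/NexusRaven-13B-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the table above
)
with open(config_path) as f:
    print(json.load(f))  # expect keys such as "bits", "group_size", "desc_act", "damp_percent"
```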
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/NexusRaven-13B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/NexusRaven-13B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `NexusRaven-13B-GPTQ`:
```shell
mkdir NexusRaven-13B-GPTQ
huggingface-cli download TheBloke/NexusRaven-13B-GPTQ --local-dir NexusRaven-13B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir NexusRaven-13B-GPTQ
huggingface-cli download TheBloke/NexusRaven-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir NexusRaven-13B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir NexusRaven-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/NexusRaven-13B-GPTQ --local-dir NexusRaven-13B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/NexusRaven-13B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/NexusRaven-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/NexusRaven-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `NexusRaven-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/NexusRaven-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
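As an illustration, serving this repo with TGI typically looks something like the following (a hedged sketch; the image tag and launcher flags can differ between TGI versions):
```shell
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id TheBloke/NexusRaven-13B-GPTQ \
  --quantize gptq
```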
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Nexusflow's Nexusraven 13B
# NexusRaven-13B: Surpassing the state-of-the-art in open-source function calling LLMs.
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="http://nexusflow.ai/blog" target="_blank">NexusRaven blog post</a> - <a href="https://huggingface.co/Nexusflow/NexusRaven-13B" target="_blank">NexusRaven-13B</a> - <a href="https://x.com/NexusflowX/status/1707470614012035561?s=20" target="_blank">NexusRaven-13B Twitter Thread</a> - <a href="https://github.com/nexusflowai/NexusRaven/" target="_blank">NexusRaven-13B Github</a> - <a href="https://huggingface.co/datasets/Nexusflow/NexusRaven_API_evaluation" target="_blank">NexusRaven API evaluation dataset</a>
</p>
<p align="center" width="100%">
<a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Table of contents
- [NexusRaven-13B: Surpassing the state-of-the-art in open-source function calling LLMs.](#nexusraven-13b-surpassing-the-state-of-the-art-in-open-source-function-calling-llms)
- [Introducing NexusRaven-13B](#introducing-nexusraven-13b)
- [NexusRaven model usage](#nexusraven-model-usage)
- [Training procedure](#training-procedure)
- [Training hyperparameters](#training-hyperparameters)
- [Framework versions](#framework-versions)
- [Limitations](#limitations)
- [License](#license)
- [References](#references)
- [Citation](#citation)
- [Contact](#contact)
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf).
## Introducing NexusRaven-13B
NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state-of-the-art in function calling capabilities.
📊 Performance Highlights: With our demonstration retrieval system, NexusRaven-13B achieves a 95% success rate in using cybersecurity tools such as CVE/CPE Search and VirusTotal, while prompting GPT-4 achieves 64%. It has significantly lower cost and faster inference speed compared to GPT-4.
🔧 Generalization to the Unseen: NexusRaven-13B generalizes to tools never seen during model training, achieving a success rate comparable with GPT-3.5 in a zero-shot setting and significantly outperforming all other open-source LLMs of similar sizes.
🔥 Commercially Permissive: The training of NexusRaven-13B does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications.
<p align="center" width="100%">
<a><img src="Retrieval-augmented_Evaluation.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
<a><img src="Zero-shot_Evaluation.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## NexusRaven model usage
NexusRaven accepts a list of Python functions. These Python functions can do anything (including sending GET/POST requests to external APIs!). The two requirements are the Python function signature and an appropriate docstring, from which the function call is generated.
NexusRaven is highly compatible with langchain. See [langchain_example.py](https://huggingface.co/Nexusflow/NexusRaven-13B/blob/main/langchain_example.py). An example without langchain can be found in [non_langchain_example.py](https://huggingface.co/Nexusflow/NexusRaven-13B/blob/main/non_langchain_example.py).
Please note that the model will sometimes reflect on its answer, so we highly recommend stopping generation with the stopping criterion `["\nReflection:"]` to avoid spending unnecessary tokens during inference, though the reflection may help in some rare cases. This is reflected in our langchain example.
The "Initial Answer" can be executed to run the function.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 2.0
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
## Limitations
1. We highly recommend using a stop criteria of `["\nReflection:"]`. The model was trained to first generate an answer and then reflect on its answer to either improve the answer or keep the answer the same. However, this "chain of thought" is often not helpful, and the final answer is seldom better than the initial call. Therefore, we strongly recommend using the Initial Call as the main call to execute.
2. The model works best when connected to a retriever when there are a multitude of functions, as a large number of functions will saturate its context window.
3. The model can be prone to generating incorrect calls. Please ensure that proper guardrails are in place to capture errant behavior.
## License
This model was trained on commercially viable data and is licensed under the [Llama 2 community license](https://huggingface.co/codellama/CodeLlama-13b-hf/blob/main/LICENSE) following the original [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf/) model.
## References
We thank the CodeLlama team for their amazing models!
```
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Citation
```
@misc{nexusraven,
title={NexusRaven: Surpassing the state-of-the-art in open-source function calling LLMs},
author={Nexusflow.ai team},
year={2023},
url={http://nexusflow.ai/blog}
}
```
## Contact
Please reach out to info@nexusflow.ai for any questions!
| 23,259 | [embedding vector omitted] |
LTP/base1 | 2022-09-19T06:36:16.000Z | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | LTP | null | null | LTP/base1 | 2 | 518 | transformers | 2022-08-14T04:35:09 | 


| Language | version |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Python](python/interface/README.md) | [](https://pypi.org/project/ltp) [](https://pypi.org/project/ltp-core) [](https://pypi.org/project/ltp-extension) |
| [Rust](rust/ltp/README.md) | [](https://crates.io/crates/ltp) |
# LTP 4
LTP (Language Technology Platform) provides a suite of Chinese natural language processing tools that users can apply to Chinese text for word segmentation, part-of-speech tagging, syntactic parsing, and related tasks.
## Citation
If you use LTP in your work, please cite this paper:
```bibtex
@article{che2020n,
title={N-LTP: An Open-source Neural Chinese Language Technology Platform with Pretrained Models},
author={Che, Wanxiang and Feng, Yunlong and Qin, Libo and Liu, Ting},
journal={arXiv preprint arXiv:2009.11616},
year={2020}
}
```
**Reference book:**
"[Natural Language Processing: Methods Based on Pre-trained Models](https://item.jd.com/13344628.html)" (authors: Wanxiang Che, Jiang Guo, Yiming Cui; chief reviewer: Ting Liu), co-authored by researchers at the Research Center for Social Computing and Information Retrieval at Harbin Institute of Technology (HIT-SCIR), has been formally published. The book focuses on new NLP methods based on pre-trained models, covering fundamentals, pre-trained word vectors, and pre-trained models, and can serve as a study reference for LTP users.
### Release Notes
- 4.2.0
  - \[Structural change\] LTP is split into two parts, making maintenance and training easier and the structure clearer
  - \[Legacy models\] To meet the widespread demand for **inference speed**, the perceptron-based algorithms were rewritten in Rust; accuracy is on par with LTP 3, while speed is **3.55** times that of LTP v3, rising to **17.17** times with multithreading enabled, although currently only the three tasks of word segmentation, POS tagging, and named entity recognition are supported
  - \[Deep learning models\] Deep learning models implemented in PyTorch, supporting all six tasks (word segmentation / POS / NER / semantic role labeling / dependency parsing / semantic dependency parsing)
  - \[Other improvements\] Improved model training methods
    - \[Both\] Training scripts and training examples are provided, so users can more conveniently train personalized models on their own private data
    - \[Deep learning models\] Training is configured via hydra, making it easy to modify training parameters and extend LTP (for example, using Modules from other packages)
  - \[Other changes\] The decoding algorithms for word segmentation, dependency parsing (Eisner), and semantic dependency parsing (Eisner) are implemented in Rust and run faster
  - \[New feature\] Models are uploaded to the [Huggingface Hub](https://huggingface.co/LTP) with automatic and faster downloads; users can also upload their own trained models for LTP inference
  - \[Breaking change\] Inference now uses a Pipeline API, which enables deeper performance optimizations later (for example, SDP and SDPG overlap substantially, so reuse can speed up inference); see the [quick start section on Github](https://github.com/hit-scir/ltp) for usage
- 4.1.0
  - Added custom word segmentation and other features
  - Fixed some bugs
- 4.0.0
  - Developed on PyTorch with a native Python interface
  - Models with different speed/accuracy trade-offs can be freely chosen as needed
  - Six tasks: word segmentation, POS tagging, named entity recognition, dependency parsing, semantic role labeling, and semantic dependency parsing
## Quick Start
### [Python](python/interface/README.md)
```bash
pip install -U ltp ltp-core ltp-extension -i https://pypi.org/simple # install ltp
```
**Note:** If you hit any errors, try reinstalling ltp with the command above; if the error persists, please report it in the Github issues.
```python
import torch
from ltp import LTP

ltp = LTP("LTP/small")  # loads the Small model by default

# move the model to the GPU
if torch.cuda.is_available():
    # ltp.cuda()
    ltp.to("cuda")

output = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "pos", "ner", "srl", "dep", "sdp"])
# results returned in dict format
print(output.cws)  # print(output[0]) / print(output['cws'])  # index access also works
print(output.pos)
print(output.sdp)

# word segmentation, POS tagging and NER with the perceptron algorithm: faster, but slightly less accurate
ltp = LTP("LTP/legacy")
# cws, pos, ner = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "ner"]).to_tuple()  # error: NER requires the POS tagging results
cws, pos, ner = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "pos", "ner"]).to_tuple()  # to_tuple converts the output to tuple format
# results returned in tuple format
print(cws, pos, ner)
```
**[Detailed documentation](python/interface/docs/quickstart.rst)**
### [Rust](rust/ltp/README.md)
```rust
use std::fs::File;
use itertools::multizip;
use ltp::{CWSModel, POSModel, NERModel, ModelSerde, Format, Codec};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("data/legacy-models/cws_model.bin")?;
    let cws: CWSModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
    let file = File::open("data/legacy-models/pos_model.bin")?;
    let pos: POSModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
    let file = File::open("data/legacy-models/ner_model.bin")?;
    let ner: NERModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;

    let words = cws.predict("他叫汤姆去拿外衣。")?;
    let pos = pos.predict(&words)?;
    let ner = ner.predict((&words, &pos))?;

    for (w, p, n) in multizip((words, pos, ner)) {
        println!("{}/{}/{}", w, p, n);
    }

    Ok(())
}
```
## Model Performance and Download Links
| Deep Learning Models | CWS | POS | NER | SRL | DEP | SDP | Speed (sent/s) |
| :---------------------------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :-----: |
| [Base](https://huggingface.co/LTP/base) | 98.7 | 98.5 | 95.4 | 80.6 | 89.5 | 75.2 | 39.12 |
| [Base1](https://huggingface.co/LTP/base1) | 99.22 | 98.73 | 96.39 | 79.28 | 89.57 | 76.57 | --.-- |
| [Base2](https://huggingface.co/LTP/base2) | 99.18 | 98.69 | 95.97 | 79.49 | 90.19 | 76.62 | --.-- |
| [Small](https://huggingface.co/LTP/small) | 98.4 | 98.2 | 94.3 | 78.4 | 88.3 | 74.7 | 43.13 |
| [Tiny](https://huggingface.co/LTP/tiny) | 96.8 | 97.1 | 91.6 | 70.9 | 83.8 | 70.1 | 53.22 |

| Perceptron Models | CWS | POS | NER | Speed (sent/s) | Notes |
| :-----------------------------------------: | :---: | :---: | :---: | :------: | :------------------------: |
| [Legacy](https://huggingface.co/LTP/legacy) | 97.93 | 98.41 | 94.28 | 21581.48 | [Performance details](rust/ltp/README.md) |

**Note: perceptron speeds were measured with 16 threads enabled.**
## Building the Wheel Package
```shell script
make bdist
```
## Bindings for Other Languages
**Perceptron algorithm**
- [Rust](rust/ltp)
- [C/C++](rust/ltp-cffi)
**Deep learning algorithms**
- [Rust](https://github.com/HIT-SCIR/libltp/tree/master/ltp-rs)
- [C++](https://github.com/HIT-SCIR/libltp/tree/master/ltp-cpp)
- [Java](https://github.com/HIT-SCIR/libltp/tree/master/ltp-java)
## Authors
- Yunlong Feng (冯云龙) \<\<[ylfeng@ir.hit.edu.cn](mailto:ylfeng@ir.hit.edu.cn)>>
## License
1. The Language Technology Platform is free and open-source for universities at home and abroad, the research institutes of the Chinese Academy of Sciences, and individual researchers; if the above institutions or individuals use the platform for commercial purposes (such as joint projects with companies), a fee is required.
2. Companies and institutions other than the above must pay to use the platform.
3. For all payment-related matters, please email car@ir.hit.edu.cn.
4. If you publish papers or obtain research results based on LTP, please state that "the Language Technology Platform (LTP) developed by the Research Center for Social Computing and Information Retrieval at Harbin Institute of Technology was used" when publishing or reporting them, and send an email to car@ir.hit.edu.cn with the title and venue of the paper or result.
| 6,719 | [embedding vector omitted] |
timm/convit_small.fb_in1k | 2023-04-24T04:14:51.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.10697",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/convit_small.fb_in1k | 0 | 518 | timm | 2023-04-24T04:14:33 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convit_small.fb_in1k
A ConViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 27.8
- GMACs: 5.8
- Activations (M): 17.9
- Image size: 224 x 224
- **Papers:**
- ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases: https://arxiv.org/abs/2103.10697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/convit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convit_small.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convit_small.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 432) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
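The `timm` registry can also be queried locally to find related checkpoints. This is a small convenience sketch; the exact names returned depend on the installed `timm` version.
```python
import timm

# List ConViT variants that ship with pretrained weights.
print(timm.list_models('convit*', pretrained=True))
# e.g. ['convit_base.fb_in1k', 'convit_small.fb_in1k', 'convit_tiny.fb_in1k']
```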
## Citation
```bibtex
@article{d2021convit,
title={ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases},
author={d'Ascoli, St{\'e}phane and Touvron, Hugo and Leavitt, Matthew and Morcos, Ari and Biroli, Giulio and Sagun, Levent},
journal={arXiv preprint arXiv:2103.10697},
year={2021}
}
```
| 2,789 | [embedding vector omitted] |
liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3 | 2023-08-27T05:44:55.000Z | [
"transformers",
"pytorch",
"llava",
"text-generation",
"region:us"
] | text-generation | liuhaotian | null | null | liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3 | 3 | 518 | transformers | 2023-08-27T05:35:38 | ---
inference: false
---
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-LCS558K-ScienceQA-Vicuna-13B-v1.3 was trained in August 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Non-commercial use.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- ScienceQA training set.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs.
We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset.
See https://llava-vl.github.io/ for more details. | 1,544 | [embedding vector omitted] |
Andyrasika/Sdxl1.0-protvisionXL | 2023-09-01T16:03:12.000Z | [
"diffusers",
"sdxl1.0",
"text-to-image",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | Andyrasika | null | null | Andyrasika/Sdxl1.0-protvisionXL | 4 | 518 | diffusers | 2023-08-31T00:43:01 | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- sdxl1.0
---
# SDXL_v1.0-ProtVisionXL
[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
import gc
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
"Andyrasika/protvisionXL", torch_dtype=torch.float16, variant="fp16",vae=vae
)
pipe.enable_xformers_memory_efficient_attention()
pipe.to("cuda")
prompt = '8k intricate, highly detailed, digital photography, best quality, masterpiece, a (full body "shot) photo of A warrior man that lived with dragons his whole life is now leading them to battle. torn clothes exposing parts of her body, scratch marks, epic, hyperrealistic, hyperrealism, 8k, cinematic lighting, greg rutkowski, wlop'
negative_prompt='(deformed iris, deformed pupils), text, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, (extra fingers), (mutated hands), poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, (fused fingers), (too many fingers), long neck, camera'
image = pipe(prompt=prompt,
negative_prompt=negative_prompt,
guidance_scale=9.0,
num_inference_steps=50).images[0]
gc.collect()
torch.cuda.empty_cache()
```
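To run the two-stage pipeline described above, the base latents can be handed to the refiner. This sketch continues the snippet above (reusing `torch`, `vae`, `pipe`, `prompt`, and `negative_prompt`) and assumes the standard `stabilityai/stable-diffusion-xl-refiner-1.0` checkpoint:
```python
from diffusers import StableDiffusionXLImg2ImgPipeline

# Load the refiner, reusing the fp16-fixed VAE from the base pipeline.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=vae,
)
refiner.to("cuda")

# Stage 1: the base model emits latents instead of a decoded image.
latents = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=50,
    output_type="latent",
).images

# Stage 2: the refiner denoises the latents into the final image.
image = refiner(prompt=prompt, negative_prompt=negative_prompt, image=latents).images[0]
```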

 | 2,742 | [
[
-0.040557861328125,
-0.043609619140625,
0.031890869140625,
0.007526397705078125,
-0.0311431884765625,
0.0002732276916503906,
0.01470184326171875,
-0.01488494873046875,
0.027557373046875,
0.040740966796875,
-0.044097900390625,
-0.041015625,
-0.05291748046875,
... |
NbAiLab/nb-whisper-small-beta | 2023-07-23T19:50:20.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"arxiv:1910.0... | automatic-speech-recognition | NbAiLab | null | null | NbAiLab/nb-whisper-small-beta | 8 | 517 | transformers | 2023-06-28T08:56:39 | ---
license: cc-by-4.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# NB-Whisper Small (beta)
This is a **_public beta_** of the Norwegian NB-Whisper Small model released by the National Library of Norway. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
<center>
<figure>
<video controls>
<source src="https://huggingface.co/NbAiLab/nb-whisper-small-beta/resolve/main/king.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption><a href="https://www.royalcourt.no/tale.html?tid=137662&sek=28409&scope=27248" target="_blank">Speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016.</a></figcaption>
</figure>
</center>
## Model Details
NB-Whisper models will be available in five different sizes:
| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | [NB-Whisper Tiny (beta)](https://huggingface.co/NbAiLab/nb-whisper-tiny-beta) |
| base | 74M | [NB-Whisper Base (beta)](https://huggingface.co/NbAiLab/nb-whisper-base-beta) |
| small | 244M | [NB-Whisper Small (beta)](https://huggingface.co/NbAiLab/nb-whisper-small-beta) |
| medium | 769M | [NB-Whisper Medium (beta)](https://huggingface.co/NbAiLab/nb-whisper-medium-beta) |
| large | 1550M | [NB-Whisper Large (beta)](https://huggingface.co/NbAiLab/nb-whisper-large-beta) |
An official release of NB-Whisper models is planned for the Fall 2023.
Please refer to the OpenAI Whisper model card for more details about the backbone model.
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** http://ai.nb.no/demo/nb-whisper
## Uses
### Direct Use
This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.
### Downstream Use
For Norwegian transcriptions we are confident that this public beta will give you State-of-the-Art results compared to currently available Norwegian ASR models of the same size. However, it is still known to show some hallucinations, as well as a tendency to drop part of the transcript from time to time. Please also note that the transcripts are typically not word by word. Spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.
A significant part of the training material comes from TV subtitles. Subtitles often shorten the content to make it easier to read. Typically, non-essential parts of the utterance can also be dropped. In some cases this is a desired ability; in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.
## Bias, Risks, and Limitations
This is a public beta that is not intended for production. Production use without adequate assessment of risks and mitigation may be considered irresponsible or harmful. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
asr = pipeline(
"automatic-speech-recognition",
"NbAiLab/nb-whisper-small-beta"
)
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'}
)
# {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
```
Timestamps can also be retrieved by passing in the right parameter.
```python
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'},
return_timestamps=True,
)
# {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første
# r. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
# 'chunks': [{'timestamp': (0.0, 5.34),
# 'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
# {'timestamp': (5.34, 8.64),
# 'text': ' hva valget dem gjør at vi skal gjøre.'},
# {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
# {'timestamp': (10.64, 17.44),
# 'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
# {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
# {'timestamp': (19.44, 23.94),
# 'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
## Training Data
The training data comes from Språkbanken and the digital collection at the National Library of Norway. It includes:
- NST Norwegian ASR Database (16 kHz), and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament produced by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** TPUv4
- **Hours used:** 1,536
- **Cloud Provider:** Google Cloud
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂, 100 percent of which was directly offset by the cloud provider.
#### Software
The model is trained using Jax/Flax. The final model is converted to PyTorch, TensorFlow, whisper.cpp, and ONNX. Please tell us if you would like future models to be converted to other formats.
## Citation & Contributors
The development of this model was part of the contributors' professional roles at the National Library of Norway, under the _NoSTram_ project led by _Per Egil Kummervold (PEK)_. The Jax code, dataset loaders, and training scripts were collectively designed by _Javier de la Rosa (JdlR)_, _Freddy Wetjen (FW)_, _Rolv-Arild Braaten (RAB)_, and _PEK_. Primary dataset curation was handled by _FW_, _RAB_, and _PEK_, while _JdlR_ and _PEK_ crafted the documentation. The project was completed under the umbrella of AiLab, directed by _Svein Arne Brygfjeld_.
All contributors played a part in shaping the optimal training strategy for the Norwegian ASR model based on the Whisper architecture.
_A paper detailing our process and findings is underway!_
## Acknowledgements
Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and on getting this to train on Google TPUs. A special thanks to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
We are releasing this ASR Whisper model as a public beta to gather constructive feedback on its performance. Please do not hesitate to contact us with any experiences, insights, or suggestions that you may have. Your input is invaluable in helping us to improve the model and ensure that it effectively serves the needs of users. Whether you have technical concerns, usability suggestions, or ideas for future enhancements, we welcome your input. Thank you for participating in this critical stage of our model's development.
If you intend to incorporate this model into your research, we kindly request that you reach out to us. We can provide you with the most current status of our upcoming paper, which you can cite to acknowledge and provide context for the work done on this model.
Please use this email as the main contact point; it is read by the entire team: <a rel="noopener nofollow" href="mailto:ailab@nb.no">ailab@nb.no</a> | 10,474 | [embedding vector omitted] |
okho0653/distilbert-base-uncased-zero-shot-sentiment-model | 2022-04-22T01:33:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | okho0653 | null | null | okho0653/distilbert-base-uncased-zero-shot-sentiment-model | 0 | 516 | transformers | 2022-04-22T01:28:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-zero-shot-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-zero-shot-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
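For readers reconstructing the setup, these values map onto `TrainingArguments` roughly as sketched below. This is an illustration only; the dataset and exact training script are not documented in this card.
```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Mirrors the hyperparameters listed above; the Adam betas/epsilon and the
# linear scheduler are the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-zero-shot-sentiment-model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

# The card does not specify the dataset, so the Trainer wiring is left as a stub:
# from transformers import Trainer
# trainer = Trainer(model=model, args=args,
#                   train_dataset=your_train_dataset, eval_dataset=your_eval_dataset)
# trainer.train()
```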
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,081 | [embedding vector omitted] |
keremberke/yolov5s-forklift | 2023-01-01T20:54:02.000Z | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/forklift-object-detection",
"model-index",
"has_space",
"region:us"
] | object-detection | keremberke | null | null | keremberke/yolov5s-forklift | 1 | 516 | yolov5 | 2023-01-01T11:18:11 |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/forklift-object-detection
model-index:
- name: keremberke/yolov5s-forklift
results:
- task:
type: object-detection
dataset:
type: keremberke/forklift-object-detection
name: keremberke/forklift-object-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.8382598267226307 # min: 0.0 - max: 1.0
name: mAP@0.5
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-forklift" src="https://huggingface.co/keremberke/yolov5s-forklift/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-forklift')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-forklift --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
| 2,047 | [embedding vector omitted] |
timm/regnetx_064.pycls_in1k | 2023-03-21T06:32:48.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:mit",
"region:us"
] | image-classification | timm | null | null | timm/regnetx_064.pycls_in1k | 0 | 516 | timm | 2023-03-21T06:32:36 | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for regnetx_064.pycls_in1k
A RegNetX-6.4GF image classification model. Pretrained on ImageNet-1k by paper authors.
The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* configurable output stride (dilation)
* configurable activation and norm layers
* option for a pre-activation bottleneck block used in RegNetV variant
* only known RegNetZ model definitions with pretrained weights
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 26.2
- GMACs: 6.5
- Activations (M): 16.4
- Image size: 224 x 224
- **Papers:**
- Designing Network Design Spaces: https://arxiv.org/abs/2003.13678
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/pycls
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnetx_064.pycls_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnetx_064.pycls_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 168, 56, 56])
# torch.Size([1, 392, 28, 28])
# torch.Size([1, 784, 14, 14])
# torch.Size([1, 1624, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnetx_064.pycls_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1624, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
| 15,496 | [embedding vector omitted] |
Tereveni-AI/gpt2-124M-uk-fiction | 2022-12-15T13:49:41.000Z | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"uk",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Tereveni-AI | null | null | Tereveni-AI/gpt2-124M-uk-fiction | 2 | 515 | transformers | 2022-03-02T23:29:05 | ---
language: uk
widget:
- text: "Но зла Юнона, суча дочка, "
tags:
- text-generation
---
Note: **the default code snippet above won't work** because we are using `AlbertTokenizer` with `GPT2LMHeadModel`; see this [issue](https://github.com/huggingface/transformers/issues/4285).
## GPT2 124M Trained on Ukrainian Fiction
### Training details
The model was trained on a corpus of 4,040 fiction books, 2.77 GiB in total.
Evaluation on [brown-uk](https://github.com/brown-uk/corpus) gives perplexity of 50.16.
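For reference, a perplexity figure like this is obtained by exponentiating the mean token-level cross-entropy. A minimal sketch (the text below is a placeholder, not the brown-uk corpus):
```python
import torch
from transformers import AlbertTokenizer, GPT2LMHeadModel

tokenizer = AlbertTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model = GPT2LMHeadModel.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model.eval()

text = "Но зла Юнона, суча дочка, "  # any held-out text works here
input_ids = tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss  # mean cross-entropy per token
print(torch.exp(loss).item())  # perplexity
```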
### Example usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model = GPT2LMHeadModel.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
input_ids = tokenizer.encode("Но зла Юнона, суча дочка,", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=3,
max_length=50
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
```
Prints something like this:
```bash
0: Но зла Юнона, суча дочка, яка затьмарила всі її таємниці: І хто з'їсть її душу, той помре». І, не дочекавшись гніву богів, посунула в пітьму, щоб не бачити перед собою. Але, за
1: Но зла Юнона, суча дочка, і довела мене до божевілля. Але він не знав нічого. Після того як я його побачив, мені стало зле. Я втратив рівновагу. Але в мене не було часу на роздуми. Я вже втратив надію
2: Но зла Юнона, суча дочка, не нарікала нам! — раптом вигукнула Юнона. — Це ти, старий йолопе! — мовила вона, не перестаючи сміятись. — Хіба ти не знаєш, що мені подобається ходити з тобою?
``` | 1,664 | [embedding vector omitted] |
facebook/s2t-medium-librispeech-asr | 2023-09-07T15:42:27.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | facebook | null | null | facebook/s2t-medium-librispeech-asr | 7 | 515 | transformers | 2022-03-02T23:29:05 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
license: mit
---
# S2T-MEDIUM-LIBRISPEECH-ASR
`s2t-medium-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install these as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-librispeech-asr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
input_features = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from evaluate import load
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 3.5 | 7.8 |
## Training data
The S2T-MEDIUM-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
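As an illustration, the fbank + utterance-level CMVN step can be sketched with torchaudio's Kaldi-compliant frontend (`Speech2TextProcessor` performs an equivalent extraction internally; exact numerics may differ slightly):
```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

waveform, sample_rate = torchaudio.load("audio.wav")  # expects 16 kHz mono

# 80-channel Kaldi-compliant log mel-filter bank features: (num_frames, 80)
fbank = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=sample_rate)

# Utterance-level CMVN: normalize each feature dimension over time.
fbank = (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-5)
```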
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
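For illustration, SpecAugment-style time and frequency masking can be sketched with torchaudio transforms (the masking parameters below are illustrative, not the exact policy used for training):
```python
import torch
import torchaudio.transforms as T

# dummy batch of filter bank features: (batch, mel bins, frames)
features = torch.randn(1, 80, 500)

spec_augment = torch.nn.Sequential(
    T.FrequencyMasking(freq_mask_param=27),  # mask a band of mel channels
    T.TimeMasking(time_mask_param=100),      # mask a span of time frames
)
augmented = spec_augment(features)
```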
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` | 5,226 | [
[
-0.006500244140625,
-0.054779052734375,
0.00725555419921875,
0.01493072509765625,
-0.0150604248046875,
-0.0120086669921875,
-0.037567138671875,
-0.029754638671875,
0.0036602020263671875,
0.030731201171875,
-0.04901123046875,
-0.02532958984375,
-0.05035400390625,... |
Helsinki-NLP/opus-mt-tc-big-en-es | 2023-10-10T10:27:47.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"en",
"es",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-tc-big-en-es | 3 | 515 | transformers | 2022-04-13T15:04:03 | ---
language:
- en
- es
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-es
results:
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: flores101-devtest
type: flores_101
args: eng spa devtest
metrics:
- name: BLEU
type: bleu
value: 28.5
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: news-test2008
type: news-test2008
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 30.1
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 57.2
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: tico19-test
type: tico19-test
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 53.0
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2009
type: wmt-2009-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 30.2
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2010
type: wmt-2010-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 37.6
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2011
type: wmt-2011-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 38.9
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2012
type: wmt-2012-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 39.5
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2013
type: wmt-2013-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 35.9
---
# opus-mt-tc-big-en-es
Neural machine translation model for translating from English (en) to Spanish (es).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): spa
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information on released models: [OPUS-MT eng-spa README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    "A wasp stung him and he had an allergic reaction.",
    "I love nature."
]
model_name = "pytorch-models/opus-mt-tc-big-en-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Una avispa lo picó y tuvo una reacción alérgica.
# Me encanta la naturaleza.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-es")
print(pipe("A wasp stung him and he had an allergic reaction."))
# expected output: Una avispa lo picó y tuvo una reacción alérgica.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-spa | tatoeba-test-v2021-08-07 | 0.73863 | 57.2 | 16583 | 134710 |
| eng-spa | flores101-devtest | 0.56440 | 28.5 | 1012 | 29199 |
| eng-spa | newssyscomb2009 | 0.58415 | 31.5 | 502 | 12503 |
| eng-spa | news-test2008 | 0.56707 | 30.1 | 2051 | 52586 |
| eng-spa | newstest2009 | 0.57836 | 30.2 | 2525 | 68111 |
| eng-spa | newstest2010 | 0.62357 | 37.6 | 2489 | 65480 |
| eng-spa | newstest2011 | 0.62415 | 38.9 | 3003 | 79476 |
| eng-spa | newstest2012 | 0.63031 | 39.5 | 3003 | 79006 |
| eng-spa | newstest2013 | 0.60354 | 35.9 | 3000 | 70528 |
| eng-spa | tico19-test | 0.73554 | 53.0 | 2100 | 66563 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:03:53 EEST 2022
* port machine: LM0-400-22516.local
| 8,063 | [
[
-0.0285797119140625,
-0.0443115234375,
0.0174102783203125,
0.0230560302734375,
-0.031982421875,
-0.016693115234375,
-0.03741455078125,
-0.026153564453125,
0.014892578125,
0.02447509765625,
-0.0340576171875,
-0.052978515625,
-0.045257568359375,
0.024688720703... |
TurkuNLP/bert-base-finnish-cased-squad2 | 2023-01-30T13:56:38.000Z | [
"transformers",
"pytorch",
"bert",
"question-answering",
"fi",
"dataset:squad_v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | TurkuNLP | null | null | TurkuNLP/bert-base-finnish-cased-squad2 | 1 | 515 | transformers | 2023-01-30T11:23:17 | ---
datasets:
- squad_v2
language:
- fi
metrics:
- accuracy
- f1
---
# bert-base-finnish-cased-v1 for QA
This is the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1/tree/main) model,
fine-tuned using the [Finnish SQuAD2.0](https://huggingface.co/datasets/TurkuNLP/squad_v2_fi) dataset. It's been trained on
question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** bert-base-finnish-v1
**Language:** Finnish
**Downstream-task:** Extractive QA
**Training data:** squad_v2_fi
**Eval data:** squad_v2_fi
## Hyperparameters
```
batch_size = 12
n_epochs = 2
base_LM_model = "bert-base-finnish-cased-v1"
max_seq_len = 384
learning_rate = 3e-5
doc_stride = 128
```
## Performance
```
"exact": 68.2424,
"f1": 73.6607,
"HasAns_exact": 58.3290,
"HasAns_f1": 69.3629,
"HasAns_total": 5817,
"NoAns_exact": 77.8073,
"NoAns_f1": 77.8073,
"NoAns_total": 6029,
"samples": 12138,
"total": 11846
```
## Comparison with other Finnish models
| model | F1 |
| ------------------------------------------ | ----: |
| TurkuNLP/bert-base-finnish-cased-squad2 | 73.66 |
| ilmariky/bert-base-finnish-cased-squad2-fi | 61.87 |
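## Usage
A minimal usage sketch with the transformers question-answering pipeline (the Finnish question/context strings are made-up examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="TurkuNLP/bert-base-finnish-cased-squad2")

# made-up example: "Where is the University of Turku located?"
result = qa(
    question="Missä Turun yliopisto sijaitsee?",
    context="Turun yliopisto on Turussa sijaitseva monitieteinen yliopisto.",
)
print(result["answer"], result["score"])
```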
| 1,268 | [
[
-0.0245513916015625,
-0.05792236328125,
0.0288543701171875,
0.0201568603515625,
-0.03094482421875,
-0.007373809814453125,
0.004486083984375,
-0.0114288330078125,
0.02435302734375,
0.03424072265625,
-0.0650634765625,
-0.0389404296875,
-0.033203125,
-0.0031223... |
timm/ese_vovnet39b.ra_in1k | 2023-04-21T23:12:23.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1904.09730",
"arxiv:1911.06667",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/ese_vovnet39b.ra_in1k | 0 | 515 | timm | 2023-04-21T23:12:01 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for ese_vovnet39b.ra_in1k
A VoVNet-v2 image classification model. Pretrained on ImageNet-1k in `timm` by Ross Wightman using RandAugment `RA` recipe. Related to `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 24.6
- GMACs: 7.1
- Activations (M): 6.7
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- An Energy and GPU-Computation Efficient Backbone Network: https://arxiv.org/abs/1904.09730
- CenterMask : Real-Time Anchor-Free Instance Segmentation: https://arxiv.org/abs/1911.06667
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('ese_vovnet39b.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'ese_vovnet39b.ra_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 64, 112, 112])
    # torch.Size([1, 256, 56, 56])
    # torch.Size([1, 512, 28, 28])
    # torch.Size([1, 768, 14, 14])
    # torch.Size([1, 1024, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'ese_vovnet39b.ra_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{lee2019energy,
title = {An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection},
author = {Lee, Youngwan and Hwang, Joong-won and Lee, Sangrok and Bae, Yuseok and Park, Jongyoul},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019}
}
```
```bibtex
@article{lee2019centermask,
title={CenterMask: Real-Time Anchor-Free Instance Segmentation},
author={Lee, Youngwan and Park, Jongyoul},
booktitle={CVPR},
year={2020}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
| 4,389 | [
[
-0.037017822265625,
-0.03851318359375,
0.004222869873046875,
0.002773284912109375,
-0.0287322998046875,
-0.0269317626953125,
-0.01322174072265625,
-0.033111572265625,
0.017974853515625,
0.03900146484375,
-0.036468505859375,
-0.0517578125,
-0.04547119140625,
... |
timm/caformer_b36.sail_in22k_ft_in1k | 2023-05-05T05:40:48.000Z | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | timm | null | null | timm/caformer_b36.sail_in22k_ft_in1k | 0 | 515 | timm | 2023-05-05T05:39:19 | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for caformer_b36.sail_in22k_ft_in1k
A CAFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 98.8
- GMACs: 23.2
- Activations (M): 67.3
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_b36.sail_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'caformer_b36.sail_in22k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 128, 56, 56])
    # torch.Size([1, 256, 28, 28])
    # torch.Size([1, 512, 14, 14])
    # torch.Size([1, 768, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'caformer_b36.sail_in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
| 3,754 | [
[
-0.04046630859375,
-0.0290679931640625,
0.00699615478515625,
0.01200103759765625,
-0.029388427734375,
-0.02447509765625,
-0.01678466796875,
-0.03167724609375,
0.0165252685546875,
0.036376953125,
-0.04052734375,
-0.05755615234375,
-0.055999755859375,
-0.00835... |
digiplay/NextPhoto_v1 | 2023-10-01T22:50:43.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/NextPhoto_v1 | 1 | 515 | diffusers | 2023-07-18T08:21:41 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/84335?modelVersionId=89650
Original Author's DEMO images :




| 755 | [
[
-0.03985595703125,
-0.01486968994140625,
0.02923583984375,
0.021942138671875,
-0.031890869140625,
-0.0075531005859375,
0.01055145263671875,
-0.00647735595703125,
0.04095458984375,
0.0216522216796875,
-0.0589599609375,
-0.0257415771484375,
-0.01727294921875,
... |
digiplay/CyberRealistic_Classic_v1.5 | 2023-07-22T06:54:39.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/CyberRealistic_Classic_v1.5 | 1 | 515 | diffusers | 2023-07-21T05:10:08 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/71185/cyberrealistic-classic
Original Author's DEMO images :



| 548 | [
[
-0.0386962890625,
-0.04315185546875,
0.040313720703125,
0.01267242431640625,
-0.0283355712890625,
0.006618499755859375,
-0.002017974853515625,
-0.0214691162109375,
0.050872802734375,
0.0418701171875,
-0.060577392578125,
-0.03399658203125,
-0.00958251953125,
... |
jondurbin/airoboros-m-7b-3.1.2 | 2023-11-01T08:46:24.000Z | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:jondurbin/airoboros-3.1",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | jondurbin | null | null | jondurbin/airoboros-m-7b-3.1.2 | 21 | 515 | transformers | 2023-10-19T16:21:12 | ---
license: apache-2.0
datasets:
- jondurbin/airoboros-3.1
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is just one last release tweaking hyperparams, which seems to result in a higher quality model than 3.1 or 3.1.1 (and doesn't have the prompt issue of 3.1).
A model built on the [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of the [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
    {"role": "system", "content": "You are Bob, a friendly AI assistant."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
    "Multiply",
    "Pi",
    [
        "Power",
        17.2456,
        2
    ]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
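For example, a minimal extraction step might look like this (a sketch; the evaluation itself is left to compute-engine or mathjson.py):
```python
import json
import re

response = """Solution as MathJSON:
<mathjson>
["Multiply", "Pi", ["Power", 17.2456, 2]]
</mathjson>"""

match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.DOTALL)
expr = json.loads(match.group(1)) if match else None
print(expr)  # ['Multiply', 'Pi', ['Power', 17.2456, 2]]
```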
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```
Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```
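On the application side, dispatching such a response can be as simple as a registry lookup (a sketch; the handler below is a hypothetical stand-in):
```python
import json

def file_analytics(action, filters):
    # hypothetical handler for the function described above
    print(f"running {action} for keyword {filters['keyword']!r}")

registry = {"file_analytics": file_analytics}

response = '{"function": "file_analytics", "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
call = json.loads(response)
registry[call["function"]](**call["params"])
```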
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ...  # search via duck duck go using search_string
    ...  # return text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ...  # call model with prompt, return output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
  "conversations": [
    {
      "value": "You are a helpful, unbiased, uncensored assistant.",
      "from": "system"
    },
    {
      "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "You can only respond using animal-related emojis.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Precede all responses with \"ZOOZOO:\"",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Include at least one bird emoji in your responses.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Describe a trip to a safari park.",
      "from": "human"
    },
    {
      "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
      "from": "gpt"
    }
  ]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| 24,617 | [
[
-0.0303802490234375,
-0.0821533203125,
0.03271484375,
0.00540924072265625,
-0.00372314453125,
-0.0187530517578125,
-0.0175018310546875,
-0.0172119140625,
0.0357666015625,
0.033721923828125,
-0.059112548828125,
-0.044189453125,
-0.031585693359375,
0.008064270... |
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | 2021-10-17T12:09:14.000Z | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | CAMeL-Lab | null | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | 4 | 514 | transformers | 2022-03-02T23:29:04 | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT Mix SA Model
## Model description
**CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT Mix SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,348 | [
[
-0.05377197265625,
-0.045745849609375,
-0.007526397705078125,
0.030670166015625,
-0.0287322998046875,
0.0026378631591796875,
-0.02386474609375,
-0.038482666015625,
0.01309967041015625,
0.03363037109375,
-0.045562744140625,
-0.042724609375,
-0.065185546875,
0... |
Helsinki-NLP/opus-mt-swc-en | 2023-08-16T12:06:34.000Z | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Helsinki-NLP | null | null | Helsinki-NLP/opus-mt-swc-en | 0 | 514 | transformers | 2022-03-02T23:29:04 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-swc-en
* source languages: swc
* target languages: en
* OPUS readme: [swc-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.en | 41.1 | 0.569 |
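## Usage
A minimal usage sketch with the transformers Marian classes (the Congo Swahili source sentence is a made-up example):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-swc-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Habari gani?"]  # made-up source example
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```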
| 824 | [
[
-0.020233154296875,
-0.0289764404296875,
0.018218994140625,
0.0290985107421875,
-0.0261993408203125,
-0.0250701904296875,
-0.0333251953125,
-0.009490966796875,
-0.005645751953125,
0.033721923828125,
-0.06103515625,
-0.049652099609375,
-0.0418701171875,
0.017... |
nyu-mll/roberta-base-10M-1 | 2021-05-20T18:57:10.000Z | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | nyu-mll | null | null | nyu-mll/roberta-base-10M-1 | 1 | 514 | transformers | 2022-03-02T23:29:05 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
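### Example Usage
A minimal fill-mask sketch that works for any of the checkpoints above (the sentence is a made-up example):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="nyu-mll/roberta-base-10M-1")
for prediction in fill("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```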
| 3,820 | [
[
-0.0379638671875,
-0.0276031494140625,
0.0240325927734375,
0.019622802734375,
-0.017364501953125,
-0.02099609375,
-0.0199737548828125,
-0.0293731689453125,
0.02569580078125,
0.0190582275390625,
-0.06427001953125,
-0.050994873046875,
-0.055389404296875,
0.017... |
digiplay/highQualityCGMIX_v1 | 2023-10-02T21:39:47.000Z | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | digiplay | null | null | digiplay/highQualityCGMIX_v1 | 1 | 514 | diffusers | 2023-07-05T03:31:15 | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/22294?modelVersionId=26620
***Note***: use the "CG" keyword in your prompt to get amazing results. :)

Original Author's DEMO images :


| 664 | [
[
-0.04656982421875,
-0.05047607421875,
0.038238525390625,
0.0301361083984375,
-0.0458984375,
0.00563812255859375,
-0.009185791015625,
-0.0135345458984375,
0.047607421875,
0.0163726806640625,
-0.06719970703125,
-0.05841064453125,
-0.0299072265625,
-0.002439498... |
matgu23/cntblv | 2023-07-14T04:27:14.000Z | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | matgu23 | null | null | matgu23/cntblv | 0 | 514 | diffusers | 2023-07-14T04:23:23 | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### cntblv Dreambooth model trained by matgu23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
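For local inference outside Colab, a minimal `diffusers` sketch (not part of the original card); it assumes the instance token is the model name `cntblv`, as is typical for fast-DreamBooth checkpoints:
```python
from diffusers import StableDiffusionPipeline

# load the DreamBooth checkpoint and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained("matgu23/cntblv").to("cuda")

# "cntblv" is assumed to be the trained concept token
image = pipe("a photo of cntblv").images[0]
image.save("cntblv_sample.png")
```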
Sample pictures of this concept:
| 495 | [
[
-0.0227203369140625,
-0.058074951171875,
0.0333251953125,
0.039947509765625,
-0.032257080078125,
0.041412353515625,
0.0093536376953125,
-0.0206451416015625,
0.036651611328125,
0.00945281982421875,
-0.0219879150390625,
-0.0293731689453125,
-0.033599853515625,
... |
keremberke/yolov5s-valorant | 2022-12-30T20:49:49.000Z | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/valorant-object-detection",
"model-index",
"has_space",
"region:us"
] | object-detection | keremberke | null | null | keremberke/yolov5s-valorant | 1 | 513 | yolov5 | 2022-12-28T09:47:44 |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/valorant-object-detection
model-index:
- name: keremberke/yolov5s-valorant
results:
- task:
type: object-detection
dataset:
type: keremberke/valorant-object-detection
name: keremberke/valorant-object-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.9818427788145484 # min: 0.0 - max: 1.0
name: mAP@0.5
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-valorant" src="https://huggingface.co/keremberke/yolov5s-valorant/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-valorant')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-valorant --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** | 2,042 | [
[
-0.05035400390625,
-0.03729248046875,
0.035888671875,
-0.025238037109375,
-0.0225982666015625,
-0.0257568359375,
0.00557708740234375,
-0.03271484375,
0.0175933837890625,
0.0304412841796875,
-0.047027587890625,
-0.0579833984375,
-0.04034423828125,
-0.00859069... |
severinsimmler/xlm-roberta-longformer-base-16384 | 2023-07-10T22:15:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"feature-extraction",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"g... | feature-extraction | severinsimmler | null | null | severinsimmler/xlm-roberta-longformer-base-16384 | 9 | 513 | transformers | 2023-04-20T15:41:44 | ---
model-index:
- name: xlm-roberta-longformer-base-16384
results: []
license: mit
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# xlm-roberta-longformer-base-16384
⚠️ This is just the PyTorch version of [`hyperonym/xlm-roberta-longformer-base-16384`](https://huggingface.co/hyperonym/xlm-roberta-longformer-base-16384) without any modifications.
**xlm-roberta-longformer** is a multilingual [Longformer](https://arxiv.org/abs/2004.05150) initialized with [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base)'s weights without further pretraining. It is intended to be fine-tuned on a downstream task.
The notebook for replicating the model is available on GitHub: https://github.com/hyperonym/dirge/blob/master/models/xlm-roberta-longformer/convert.ipynb
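A minimal loading sketch (not part of the original card), e.g. for extracting features before fine-tuning on a downstream task:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("severinsimmler/xlm-roberta-longformer-base-16384")
model = AutoModel.from_pretrained("severinsimmler/xlm-roberta-longformer-base-16384")

# the model accepts sequences of up to 16,384 tokens
inputs = tokenizer("A very long multilingual document ...", return_tensors="pt")
outputs = model(**inputs)  # last_hidden_state: (batch, seq_len, hidden_size)
```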
| 1,214 | [
[
-0.0048828125,
-0.042236328125,
0.0240478515625,
0.034759521484375,
-0.022552490234375,
-0.00469207763671875,
-0.036346435546875,
-0.034759521484375,
0.032379150390625,
0.04681396484375,
-0.042022705078125,
-0.030487060546875,
-0.054046630859375,
0.016067504... |
KBLab/robust-swedish-sentiment-multiclass | 2023-06-16T10:14:05.000Z | [
"transformers",
"pytorch",
"safetensors",
"megatron-bert",
"text-classification",
"sv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | KBLab | null | null | KBLab/robust-swedish-sentiment-multiclass | 3 | 513 | transformers | 2023-05-08T09:28:15 | ---
license: apache-2.0
language:
- sv
---
The National Library of Sweden/KBLab releases a robust, multiclass sentiment classifier fine-tuned on [Megatron-BERT-large-165K](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-165k). The model was trained on approximately 75K Swedish texts from multiple linguistic domains and datasets. More information on the datasets will be published on [the KBLab blog](https://kb-labb.github.io/).
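A minimal usage sketch (not part of the original card); the example sentence is illustrative and the returned label names come from the checkpoint's configuration:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KBLab/robust-swedish-sentiment-multiclass",
)

# a hypothetical Swedish example sentence
print(classifier("Jag är så glad idag!"))
```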
The model has a binary sister version trained on the same dataset. The binary model is to be uploaded soon. | 555 | [
[
-0.0572509765625,
-0.011077880859375,
0.027008056640625,
0.026275634765625,
-0.026580810546875,
0.005733489990234375,
-0.01100921630859375,
-0.032470703125,
0.019775390625,
0.0308990478515625,
-0.0269927978515625,
-0.051177978515625,
-0.046905517578125,
0.00... |
facebook/mms-tts-kor | 2023-09-01T10:39:38.000Z | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | facebook | null | null | facebook/mms-tts-kor | 1 | 513 | transformers | 2023-09-01T10:39:22 |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Korean Text-to-Speech
This repository contains the **Korean (kor)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-kor")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-kor")
text = "some example text in the Korean language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
# squeeze the (1, num_samples) torch tensor to a 1-D numpy array for scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
Note: For this checkpoint, the input text must be converted to the Latin alphabet first using the [uroman](https://github.com/isi-nlp/uroman) tool.
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
| 4,116 | [
[
-0.019622802734375,
-0.055908203125,
0.02020263671875,
0.037841796875,
-0.00835418701171875,
-0.002025604248046875,
-0.0230255126953125,
-0.018280029296875,
0.02520751953125,
0.0229339599609375,
-0.055206298828125,
-0.0421142578125,
-0.0404052734375,
0.00500... |
mshenoda/roberta-spam | 2023-08-23T23:48:49.000Z | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:mshenoda/spam-messages",
"arxiv:1907.11692",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | mshenoda | null | null | mshenoda/roberta-spam | 2 | 512 | transformers | 2023-06-04T02:30:20 | ---
license: mit
datasets:
- mshenoda/spam-messages
pipeline_tag: text-classification
widget:
- text: >-
U have a secret admirer. REVEAL who thinks U R So special. Call 09065174042.
To opt out Reply REVEAL STOP. 1.50 per msg recd.
example_title: spam example 1
- text: >-
Hey so this sat are we going for the intro pilates only? Or the kickboxing
too?
example_title: ham example 1
- text: >-
Great News! Call FREEFONE 08006344447 to claim your guaranteed $1000 CASH or
$2000 gift. Speak to a live operator NOW!
example_title: spam example 2
- text: Dude im no longer a pisces. Im an aquarius now.
example_title: ham example 2
language:
- en
---
# RoBERTa-based Spam Message Detection
Spam messages frequently carry malicious links or phishing attempts, posing significant threats to both organizations and their users. By deploying our RoBERTa-based spam message detection system, organizations can greatly enhance their security infrastructure: the system effectively detects and filters out spam messages, adding an extra layer of security that safeguards against potential financial losses, legal consequences, and reputational harm.
## Model Output
- 0 is ham
- 1 is spam
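A minimal usage sketch (not part of the original card), using a spam-like example similar to the card's widget texts:
```python
from transformers import pipeline

spam_detector = pipeline("text-classification", model="mshenoda/roberta-spam")

# the returned label corresponds to the mapping above (0 = ham, 1 = spam)
print(spam_detector("Call FREEFONE 08006344447 to claim your guaranteed $1000 CASH!"))
```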
## Dataset
https://huggingface.co/datasets/mshenoda/spam-messages
The dataset is composed of messages labeled as ham or spam, merged from three data sources:
1. SMS Spam Collection https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset
2. Telegram Spam Ham https://huggingface.co/datasets/thehamkercat/telegram-spam-ham/tree/main
3. Enron Spam: https://huggingface.co/datasets/SetFit/enron_spam/tree/main (only used message column and labels)
The preparation script for the Enron data is available at https://github.com/mshenoda/roberta-spam/tree/main/data/enron.
The data is split into 80% train, 10% validation, and 10% test sets; the scripts used to split and merge the three data sources are available at https://github.com/mshenoda/roberta-spam/tree/main/data/utils.
## Architecture
The model is a fine-tuned version of RoBERTa (roberta-base: https://huggingface.co/roberta-base; paper: https://arxiv.org/abs/1907.11692).
## Code
https://github.com/mshenoda/roberta-spam | 2,189 | [
[
-0.0205841064453125,
-0.0755615234375,
0.010101318359375,
0.035064697265625,
-0.0268096923828125,
-0.01070404052734375,
-0.005527496337890625,
-0.0467529296875,
0.01296234130859375,
0.0634765625,
-0.045654296875,
-0.059783935546875,
-0.054351806640625,
0.016... |
sharpbai/Llama-2-7b-hf | 2023-07-27T03:22:50.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"text-generation-inference",
"region:us"
] | text-generation | sharpbai | null | null | sharpbai/Llama-2-7b-hf | 1 | 512 | transformers | 2023-07-19T09:58:14 | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Llama-2-7b-hf
*The weight file is split into chunks with a size of 405MB for convenient and fast parallel downloads*
A 405MB split weight version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
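A minimal loading sketch (not part of the original card); it assumes `accelerate` is installed for `device_map="auto"` and a GPU with roughly 14 GB of memory for the fp16 weights:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sharpbai/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "sharpbai/Llama-2-7b-hf",
    torch_dtype=torch.float16,  # load the split shards in half precision
    device_map="auto",          # requires the accelerate package
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```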
The original model card follows below.
-----------------------------------------
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| | 10,570 | [
[
-0.0161285400390625,
-0.051605224609375,
0.028472900390625,
0.0178680419921875,
-0.030426025390625,
0.0171356201171875,
-0.0032329559326171875,
-0.056671142578125,
0.0077667236328125,
0.0214080810546875,
-0.051483154296875,
-0.0406494140625,
-0.05279541015625,
... |
alexandrainst/da-hatespeech-detection-base | 2023-09-20T11:56:09.000Z | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | alexandrainst | null | null | alexandrainst/da-hatespeech-detection-base | 1 | 511 | transformers | 2022-03-02T23:29:04 | ---
language:
- da
license: apache-2.0
widget:
- text: Senile gamle idiot
---
# Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-base")
```
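Continuing from the snippet above, a minimal inference sketch (not part of the original card) on the card's widget example; the label names come from the checkpoint's configuration:
```python
import torch

inputs = tokenizer("Senile gamle idiot", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pick the most probable class and map it to its configured name
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```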
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. | 984 | [
[
-0.036712646484375,
-0.05792236328125,
0.0183258056640625,
0.01535797119140625,
-0.025909423828125,
0.00482177734375,
-0.02313232421875,
-0.0333251953125,
0.01047515869140625,
0.0225677490234375,
-0.0267181396484375,
-0.04620361328125,
-0.065673828125,
0.016... |
avichr/Legal-heBERT | 2022-07-07T07:31:39.000Z | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:1911.03090",
"arxiv:2010.02559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | avichr | null | null | avichr/Legal-heBERT | 1 | 511 | transformers | 2022-04-19T11:53:43 | # Legal-HeBERT
Legal-HeBERT is a BERT model for the Hebrew legal and legislative domains. It is intended to improve legal NLP research and tool development in Hebrew. We release two versions of Legal-HeBERT. The first version is a fine-tuned model of [HeBERT](https://github.com/avichaychriqui/HeBERT) applied to legal and legislative documents. The second version uses [HeBERT](https://github.com/avichaychriqui/HeBERT)'s architecture guidelines to train a BERT model from scratch. <br>
We continue to collect legal data, examine different architectural designs, and build tagged datasets and legal tasks for evaluating and developing Hebrew legal tools.
## Training Data
Our training datasets are:
| Name | Hebrew Description | Size (GB) | Documents | Sentences | Words | Notes |
|------|--------------------|-----------|-----------|-----------|-------|-------|
| The Israeli Law Book | ספר החוקים הישראלי | 0.05 | 2,338 | 293,352 | 4,851,063 | |
| Judgments of the Supreme Court | מאגר פסקי הדין של בית המשפט העליון | 0.7 | 212,348 | 5,790,138 | 79,672,415 | |
| Custody courts | החלטות בתי הדין למשמורת | 2.46 | 169,708 | 8,555,893 | 213,050,492 | |
| Law memoranda, drafts of secondary legislation and drafts of support tests that have been distributed to the public for comment | תזכירי חוק, טיוטות חקיקת משנה וטיוטות מבחני תמיכה שהופצו להערות הציבור | 0.4 | 3,291 | 294,752 | 7,218,960 | |
| Supervisors of Land Registration judgments | מאגר פסקי דין של המפקחים על רישום המקרקעין | 0.02 | 559 | 67,639 | 1,785,446 | |
| Decisions of the Labor Court - Corona | מאגר החלטות בית הדין לעניין שירות התעסוקה – קורונה | 0.001 | 146 | 3,505 | 60,195 | |
| Decisions of the Israel Lands Council | החלטות מועצת מקרקעי ישראל | | 118 | 11,283 | 162,692 | aggregate file |
| Judgments of the Disciplinary Tribunal and the Israel Police Appeals Tribunal | פסקי דין של בית הדין למשמעת ובית הדין לערעורים של משטרת ישראל | 0.02 | 54 | 83,724 | 1,743,419 | aggregate files |
| Disciplinary Appeals Committee in the Ministry of Health | ועדת ערר לדין משמעתי במשרד הבריאות | 0.004 | 252 | 21,010 | 429,807 | 465 files are scanned and could not be parsed |
| Attorney General's Positions | מאגר התייצבויות היועץ המשפטי לממשלה | 0.008 | 281 | 32,724 | 813,877 | |
| Legal Opinions of the Attorney General | מאגר חוות דעת היועץ המשפטי לממשלה | 0.002 | 44 | 7,132 | 188,053 | |
| Total | | 3.665 | 389,139 | 15,161,152 | 309,976,419 | |
We thank <b>Yair Gardin</b> for referring us to the governance data, <b>Elhanan Schwarts</b> for collecting and parsing The Israeli Law Book, and <b>Jonathan Schler</b> for collecting the judgments of the Supreme Court.
## Training process
* Vocabulary size: 50,000 tokens
* 4 epochs (roughly 1M steps)
* lr = 5e-5
* mlm_probability = 0.15
* batch size = 32 (per GPU)
* NVIDIA GeForce RTX 2080 TI + NVIDIA GeForce RTX 3090 (1 week of training)
### Additional training settings:
<b>Fine-tuned [HeBERT](https://github.com/avichaychriqui/HeBERT) model:</b> The first eight layers were frozen (as [Lee et al. (2019)](https://arxiv.org/abs/1911.03090) suggest)<br>
<b>Legal-HeBERT trained from scratch:</b> The training process is similar to that of [HeBERT](https://github.com/avichaychriqui/HeBERT) and inspired by [Chalkidis et al. (2020)](https://arxiv.org/abs/2010.02559) <br>
## How to use
The models can be found on the Hugging Face hub and can be fine-tuned for any downstream task:
```
# !pip install transformers==4.14.1
from transformers import AutoTokenizer, AutoModel

# choose one of the two checkpoints:
model_name = 'avichr/Legal-heBERT_ft'   # the fine-tuned HeBERT model
# model_name = 'avichr/Legal-heBERT'    # the legal HeBERT model trained from scratch

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model=model_name,
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```
## Stay tuned!
We are still working on our models and the datasets. We will edit this page as we progress. We are open for collaborations.
## If you used this model, please cite us as:
Chriqui, Avihay, Yahav, Inbal and Bar-Siman-Tov, Ittai, Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts (June 27, 2022). Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4147127
```
@article{chriqui2021hebert,
title={Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts},
author={Chriqui, Avihay, Yahav, Inbal and Bar-Siman-Tov, Ittai},
journal={SSRN preprint:4147127},
year={2022}
}
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il), The Coller AI Lab <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il), The Coller AI Lab <br>
[Ittai Bar-Siman-Tov](mailto:Ittai.Bar-Siman-Tov@biu.ac.il), the BIU Innovation Lab for Law, Data-Science and Digital Ethics <br>
Thank you, תודה, شكرا <br>
| 7,821 | [
[
-0.02752685546875,
-0.02911376953125,
0.023193359375,
0.01450347900390625,
-0.03729248046875,
-0.01374053955078125,
-0.0014944076538085938,
-0.0218353271484375,
0.020172119140625,
0.034698486328125,
-0.0183868408203125,
-0.060821533203125,
-0.06500244140625,
... |
emre/turkish-sentiment-analysis | 2023-03-18T20:31:56.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"tr",
"dataset:emre/autotrain-data-turkish-sentiment-analysis",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | emre | null | null | emre/turkish-sentiment-analysis | 4 | 511 | transformers | 2022-05-15T20:05:07 | ---
tags: autotrain
language: tr
widget:
- text: "Bu ürün gerçekten güzel çıktı"
datasets:
- emre/autotrain-data-turkish-sentiment-analysis
co2_eq_emissions: 120.82460124309924
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 870727732
- CO2 Emissions (in grams): 120.82460124309924
## Validation Metrics
- Loss: 0.1098366305232048
- Accuracy: 0.9697853317600073
- Macro F1: 0.9482820974460786
- Micro F1: 0.9697853317600073
- Weighted F1: 0.9695237873890088
- Macro Precision: 0.9540948884759232
- Micro Precision: 0.9697853317600073
- Weighted Precision: 0.9694186941924757
- Macro Recall: 0.9428467518468838
- Micro Recall: 0.9697853317600073
- Weighted Recall: 0.9697853317600073
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Bu ürün gerçekten güzel çıktı"}' https://api-inference.huggingface.co/models/emre/turkish-sentiment-analysis
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emre/turkish-sentiment-analysis", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emre/turkish-sentiment-analysis", use_auth_token=True)
inputs = tokenizer("Bu ürün gerçekten güzel çıktı", return_tensors="pt")
outputs = model(**inputs)
``` | 1,421 | [
[
-0.039886474609375,
-0.034423828125,
0.01366424560546875,
0.010986328125,
-0.006793975830078125,
0.0045318603515625,
-0.008819580078125,
-0.01384735107421875,
-0.0039520263671875,
0.004779815673828125,
-0.04302978515625,
-0.0477294921875,
-0.0625,
0.00451278... |
snorkelai/sdnet | 2023-08-30T00:27:54.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"arxiv:2203.12252",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | snorkelai | null | null | snorkelai/sdnet | 0 | 511 | transformers | 2022-10-19T18:58:55 | ---
license: cc-by-4.0
---
This model is an adaptation of Self-Describing Networks, introduced by _Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han and Le Sun_ in _Few-shot Named Entity Recognition with Self-describing Networks_ (2022).
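Since the released checkpoint is a T5-based sequence-to-sequence model, it can be loaded as follows. This is a minimal sketch (not part of the original card); the exact prompt format for few-shot NER is defined in the paper:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("snorkelai/sdnet")
model = AutoModelForSeq2SeqLM.from_pretrained("snorkelai/sdnet")
```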
https://arxiv.org/pdf/2203.12252.pdf | 266 | [
[
-0.013671875,
-0.01934814453125,
0.0305023193359375,
0.01776123046875,
0.001262664794921875,
-0.046417236328125,
0.0299835205078125,
-0.049957275390625,
0.0244293212890625,
0.06732177734375,
-0.05035400390625,
0.0031280517578125,
-0.0275115966796875,
0.01902... |
jinhybr/OCR-DocVQA-Donut | 2022-11-04T22:23:22.000Z | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"donut",
"image-to-text",
"vision",
"document-question-answering",
"arxiv:2111.15664",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | document-question-answering | jinhybr | null | null | jinhybr/OCR-DocVQA-Donut | 1 | 511 | transformers | 2022-11-04T22:11:29 | ---
license: mit
pipeline_tag: document-question-answering
tags:
- donut
- image-to-text
- vision
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---
# Donut (base-sized model, fine-tuned on DocVQA)
Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape `(batch_size, seq_len, hidden_size)`), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

## Intended uses & limitations
This model is fine-tuned on DocVQA, a document visual question answering dataset.
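A hedged inference sketch (not part of the original card), assuming this checkpoint follows the prompt conventions of the original `naver-clova-ix/donut-base-finetuned-docvqa` checkpoint; `invoice.png` is a placeholder path:
```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jinhybr/OCR-DocVQA-Donut")
model = VisionEncoderDecoderModel.from_pretrained("jinhybr/OCR-DocVQA-Donut")

image = Image.open("invoice.png").convert("RGB")  # placeholder document image
question = "What is the invoice number?"
# DocVQA-style task prompt used by the original Donut checkpoints (assumption)
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

# strip special tokens and the leading task token, then parse to JSON
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "")
sequence = sequence.replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```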
We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples. | 1,567 | [
[
-0.02191162109375,
-0.054962158203125,
0.0265350341796875,
-0.0199127197265625,
-0.0113372802734375,
-0.005401611328125,
-0.0001728534698486328,
-0.0218048095703125,
0.0116119384765625,
0.052337646484375,
-0.047393798828125,
-0.017669677734375,
-0.05807495117187... |
pierreguillou/layout-xlm-base-finetuned-with-DocLayNet-base-at-linelevel-ml384 | 2023-05-19T06:38:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"object-detection",
"vision",
"generated_from_trainer",
"DocLayNet",
"COCO",
"PDF",
"IBM",
"Financial-Reports",
"Finance",
"Manuals",
"Scientific-Articles",
"Science",
"Laws",
"Law",
"Regulations",
... | token-classification | pierreguillou | null | null | pierreguillou/layout-xlm-base-finetuned-with-DocLayNet-base-at-linelevel-ml384 | 2 | 511 | transformers | 2023-03-02T12:49:44 | ---
language:
- multilingual
- en
- de
- fr
- ja
license: mit
tags:
- object-detection
- vision
- generated_from_trainer
- DocLayNet
- COCO
- PDF
- IBM
- Financial-Reports
- Finance
- Manuals
- Scientific-Articles
- Science
- Laws
- Law
- Regulations
- Patents
- Government-Tenders
- object-detection
- image-segmentation
- token-classification
inference: false
datasets:
- pierreguillou/DocLayNet-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layout-xlm-base-finetuned-with-DocLayNet-base-at-linelevel-ml384
results:
- task:
name: Token Classification
type: token-classification
metrics:
- name: f1
type: f1
value: 0.7336
- name: accuracy
type: accuracy
value: 0.9373
---
# Document Understanding model (finetuned LayoutXLM base at line level on DocLayNet base)
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2364
- Precision: 0.7260
- Recall: 0.7415
- F1: 0.7336
- Accuracy: 0.9373
## References
### Blog posts
- Layout XLM base
- (03/05/2023) [Document AI | Inference APP and fine-tuning notebook for Document Understanding at line level with LayoutXLM base]()
- LiLT base
- (02/16/2023) [Document AI | Inference APP and fine-tuning notebook for Document Understanding at paragraph level](https://medium.com/@pierre_guillou/document-ai-inference-app-and-fine-tuning-notebook-for-document-understanding-at-paragraph-level-c18d16e53cf8)
- (02/14/2023) [Document AI | Inference APP for Document Understanding at line level](https://medium.com/@pierre_guillou/document-ai-inference-app-for-document-understanding-at-line-level-a35bbfa98893)
- (02/10/2023) [Document AI | Document Understanding model at line level with LiLT, Tesseract and DocLayNet dataset](https://medium.com/@pierre_guillou/document-ai-document-understanding-model-at-line-level-with-lilt-tesseract-and-doclaynet-dataset-347107a643b8)
- (01/31/2023) [Document AI | DocLayNet image viewer APP](https://medium.com/@pierre_guillou/document-ai-doclaynet-image-viewer-app-3ac54c19956)
- (01/27/2023) [Document AI | Processing of DocLayNet dataset to be used by layout models of the Hugging Face hub (finetuning, inference)](https://medium.com/@pierre_guillou/document-ai-processing-of-doclaynet-dataset-to-be-used-by-layout-models-of-the-hugging-face-hub-308d8bd81cdb)
### Notebooks (paragraph level)
- LiLT base
- [Document AI | Inference APP at paragraph level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/Gradio_inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levelparagraphs_ml512.ipynb)
- [Document AI | Inference at paragraph level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levelparagraphs_ml512.ipynb)
- [Document AI | Fine-tune LiLT on DocLayNet base in any language at paragraph level (chunk of 512 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_paragraphlevel_ml_512.ipynb)
### Notebooks (line level)
- Layout XLM base
- [Document AI | Inference at line level with a Document Understanding model (LayoutXLM base fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LayoutXLM_base_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
- [Document AI | Inference APP at line level with a Document Understanding model (LayoutXLM base fine-tuned on DocLayNet base dataset)](https://github.com/piegu/language-models/blob/master/Gradio_inference_on_LayoutXLM_base_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
- [Document AI | Fine-tune LayoutXLM base on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LayoutXLM_base_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)
- LiLT base
- [Document AI | Inference at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
- [Document AI | Inference APP at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/Gradio_inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
- [Document AI | Fine-tune LiLT on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)
- [DocLayNet image viewer APP](https://github.com/piegu/language-models/blob/master/DocLayNet_image_viewer_APP.ipynb)
- [Processing of DocLayNet dataset to be used by layout models of the Hugging Face hub (finetuning, inference)](processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb)
### APP
You can test this model with this APP in Hugging Face Spaces: [Inference APP for Document Understanding at line level (v2)](https://huggingface.co/spaces/pierreguillou/Inference-APP-Document-Understanding-at-linelevel-v2).

### DocLayNet dataset
[DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground truth using bounding boxes for 11 distinct class labels on 80,863 unique pages from 6 document categories.
To date, the dataset can be downloaded through direct links or via the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)
Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
## Model description
The model was fine-tuned at **line level on chunks of 384 tokens with an overlap of 128 tokens**. Thus, the model was trained with all layout and text data of all pages of the dataset.
At inference time, a calculation of the best probabilities assigns a label to each line bounding box.
## Inference
See notebook: [Document AI | Inference at line level with a Document Understanding model (LayoutXLM base fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LayoutXLM_base_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
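Alternatively, a minimal sketch (not part of the original card); it assumes the repository ships the processor files and that Tesseract is installed, since the processor runs OCR by default:
```python
from PIL import Image
from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification

model_id = "pierreguillou/layout-xlm-base-finetuned-with-DocLayNet-base-at-linelevel-ml384"
processor = LayoutXLMProcessor.from_pretrained(model_id)  # runs OCR via Tesseract
model = LayoutLMv2ForTokenClassification.from_pretrained(model_id)

image = Image.open("page.png").convert("RGB")  # placeholder document page
encoding = processor(image, return_tensors="pt", truncation=True, max_length=384)

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]
```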
## Training and evaluation data
See notebook: [Document AI | Fine-tune LayoutXLM base on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LayoutXLM_base_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:|
| No log | 0.12 | 300 | 0.8413 | 0.1311 | 0.5185 | 0.1437 | 0.1205 |
| 0.9231 | 0.25 | 600 | 0.8751 | 0.5031 | 0.4108 | 0.4637 | 0.5498 |
| 0.9231 | 0.37 | 900 | 0.8887 | 0.5206 | 0.3911 | 0.5076 | 0.5343 |
| 0.369 | 0.5 | 1200 | 0.8724 | 0.5365 | 0.4118 | 0.5094 | 0.5667 |
| 0.2737 | 0.62 | 1500 | 0.8960 | 0.6033 | 0.3328 | 0.6046 | 0.6020 |
| 0.2737 | 0.75 | 1800 | 0.9186 | 0.6404 | 0.2984 | 0.6062 | 0.6787 |
| 0.2542 | 0.87 | 2100 | 0.9163 | 0.6593 | 0.3115 | 0.6324 | 0.6887 |
| 0.2542 | 1.0 | 2400 | 0.9198 | 0.6537 | 0.2878 | 0.6160 | 0.6962 |
| 0.1938 | 1.12 | 2700 | 0.9165 | 0.6752 | 0.3414 | 0.6673 | 0.6833 |
| 0.1581 | 1.25 | 3000 | 0.9193 | 0.6871 | 0.3611 | 0.6868 | 0.6875 |
| 0.1581 | 1.37 | 3300 | 0.9256 | 0.6822 | 0.2763 | 0.6988 | 0.6663 |
| 0.1428 | 1.5 | 3600 | 0.9287 | 0.7084 | 0.3065 | 0.7246 | 0.6929 |
| 0.1428 | 1.62 | 3900 | 0.9194 | 0.6812 | 0.2942 | 0.6866 | 0.6760 |
| 0.1025 | 1.74 | 4200 | 0.9347 | 0.7223 | 0.2990 | 0.7315 | 0.7133 |
| 0.1225 | 1.87 | 4500 | 0.9360 | 0.7048 | 0.2729 | 0.7249 | 0.6858 |
| 0.1225 | 1.99 | 4800 | 0.9396 | 0.7222 | 0.2826 | 0.7497 | 0.6966 |
| 0.108 | 2.12 | 5100 | 0.9301 | 0.7193 | 0.3071 | 0.7022 | 0.7372 |
| 0.108 | 2.24 | 5400 | 0.9334 | 0.7243 | 0.2999 | 0.7250 | 0.7237 |
| 0.0799 | 2.37 | 5700 | 0.9382 | 0.7254 | 0.2710 | 0.7310 | 0.7198 |
| 0.0793 | 2.49 | 6000 | 0.9329 | 0.7228 | 0.3201 | 0.7352 | 0.7108 |
| 0.0793 | 2.62 | 6300 | 0.9373 | 0.7336 | 0.3035 | 0.7260 | 0.7415 |
| 0.0696 | 2.74 | 6600 | 0.9374 | 0.7275 | 0.3137 | 0.7313 | 0.7237 |
| 0.0696 | 2.87 | 6900 | 0.9381 | 0.7253 | 0.3242 | 0.7369 | 0.7142 |
| 0.0866 | 2.99 | 7200 | 0.9407 | 0.7321 | 0.2473 | 0.7439 | 0.7207 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.10.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
## Other models
- Line level
- [Document Understanding model (finetuned LiLT base at line level on DocLayNet base)](https://huggingface.co/pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-linelevel-ml384) (accuracy | tokens: 85.84% - lines: 91.97%)
- [Document Understanding model (finetuned LayoutXLM base at line level on DocLayNet base)](https://huggingface.co/pierreguillou/layout-xlm-base-finetuned-with-DocLayNet-base-at-linelevel-ml384) (accuracy | tokens: 93.73% - lines: ...)
- Paragraph level
- [Document Understanding model (finetuned LiLT base at paragraph level on DocLayNet base)](https://huggingface.co/pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-paragraphlevel-ml512) (accuracy | tokens: 86.34% - paragraphs: 68.15%)
- [Document Understanding model (finetuned LayoutXLM base at paragraph level on DocLayNet base)](https://huggingface.co/pierreguillou/layout-xlm-base-finetuned-with-DocLayNet-base-at-paragraphlevel-ml512) (accuracy | tokens: 96.93% - paragraphs: 86.55%) | 11,870 | [
[
-0.034149169921875,
-0.05731201171875,
0.032135009765625,
0.01241302490234375,
0.00010591745376586914,
-0.033935546875,
-0.005191802978515625,
-0.03302001953125,
0.00423431396484375,
0.03515625,
-0.03460693359375,
-0.0601806640625,
-0.037445068359375,
-0.013... |
timm/vit_small_patch14_reg4_dinov2.lvd142m | 2023-10-30T04:55:33.000Z | [
"timm",
"pytorch",
"safetensors",
"arxiv:2309.16588",
"arxiv:2304.07193",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | null | timm | null | null | timm/vit_small_patch14_reg4_dinov2.lvd142m | 0 | 511 | timm | 2023-10-30T04:53:09 | ---
tags:
- timm
library_name: timm
license: apache-2.0
---
# Model card for vit_small_patch14_reg4_dinov2.lvd142m
A Vision Transformer (ViT) image feature model with registers. Pretrained on LVD-142M with self-supervised DINOv2 method.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.1
- GMACs: 29.6
- Activations (M): 57.5
- Image size: 518 x 518
- **Papers:**
- Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
- DINOv2: Learning Robust Visual Features without Supervision: https://arxiv.org/abs/2304.07193
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Original:** https://github.com/facebookresearch/dinov2
- **Pretrain Dataset:** LVD-142M
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_small_patch14_reg4_dinov2.lvd142m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_small_patch14_reg4_dinov2.lvd142m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1374, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{darcet2023vision,
title={Vision Transformers Need Registers},
author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv preprint arXiv:2309.16588},
year={2023}
}
```
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
``` | 4,305 | [
[
-0.03790283203125,
-0.0240936279296875,
0.00992584228515625,
0.00244140625,
-0.032135009765625,
-0.0269317626953125,
-0.018798828125,
-0.034271240234375,
0.0101318359375,
0.019378662109375,
-0.034423828125,
-0.037506103515625,
-0.04925537109375,
-0.003854751... |
sshleifer/distilbart-xsum-12-3 | 2021-06-14T07:57:16.000Z | [
"transformers",
"pytorch",
"jax",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | summarization | sshleifer | null | null | sshleifer/distilbart-xsum-12-3 | 10 | 510 | transformers | 2022-03-02T23:29:05 | ---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
- xsum
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
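A minimal summarization sketch (not part of the original card); the input text is illustrative:
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-xsum-12-3")
tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-3")

article = "..."  # a long news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```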
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
| 1,705 | [
[
-0.044097900390625,
-0.023468017578125,
0.0386962890625,
0.026702880859375,
-0.0132598876953125,
0.01514434814453125,
0.013519287109375,
-0.0012292861938476562,
0.0157012939453125,
0.02886962890625,
-0.0628662109375,
-0.039337158203125,
-0.0546875,
-0.011627... |
seastar105/whisper-small-ko-zeroth | 2022-12-11T07:42:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"whisper-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | seastar105 | null | null | seastar105/whisper-small-ko-zeroth | 2 | 510 | transformers | 2022-12-11T00:49:45 | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
- whisper-event
datasets:
- kresnik/zeroth_korean
metrics:
- wer
model-index:
- name: Whisper Small Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Zeroth Korean
type: kresnik/zeroth_korean
config: clean
split: test
args: 'split: test'
metrics:
- name: Wer
type: wer
value: 6.761029965366662
---
# Whisper Small Korean
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Zeroth Korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0899
- Wer: 6.7610
## Model description
More information needed
## Intended uses & limitations
More information needed
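A minimal transcription sketch (not part of the original card); it assumes `ffmpeg` is available for audio decoding, and `korean_speech.wav` is a placeholder path:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="seastar105/whisper-small-ko-zeroth",
)

# transcribe a Korean recording (placeholder path)
print(asr("korean_speech.wav")["text"])
```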
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1277 | 0.72 | 1000 | 0.1489 | 12.2271 |
| 0.0379 | 1.44 | 2000 | 0.1053 | 6.7159 |
| 0.0138 | 2.16 | 3000 | 0.0918 | 6.0382 |
| 0.0141 | 2.87 | 4000 | 0.0899 | 6.7610 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0a0+d0d6b1f
- Datasets 2.7.1
- Tokenizers 0.13.2
| 1,992 | [
[
-0.0270233154296875,
-0.0360107421875,
0.010650634765625,
0.00971221923828125,
-0.0268707275390625,
-0.0377197265625,
-0.0209808349609375,
-0.02264404296875,
0.016082763671875,
0.0253143310546875,
-0.052886962890625,
-0.04248046875,
-0.042205810546875,
-0.01... |