modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ai-forever/mGPT-1.3B-tajik | 2023-08-11T08:02:34.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"mgpt",
"tg",
"en",
"ru",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | ai-forever | null | null | ai-forever/mGPT-1.3B-tajik | 2 | 2 | transformers | 2023-08-10T05:12:02 | ---
language:
- tg
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---
# 🇹🇯 Tajik mGPT 1.3B
A language model for Tajik. The model has 1.3B parameters, as its name suggests.
Tajik belongs to the Indo-European language family and has approximately 8.2 million speakers. Here are some facts about it:
1. It is a variety of Persian spoken in Tajikistan and parts of Afghanistan.
2. Unlike Persian in Iran, Tajik is written in the Cyrillic script due to Soviet influence.
3. The Pamiri languages of Tajikistan's Gorno-Badakhshan Autonomous Region are closely related to Tajik.
## Technical details
It is one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.
We found additional data for 23 languages, most of which are considered low-resource, and decided to further tune the base model. **Tajik mGPT 1.3B** was trained for another 50,000 steps with batch_size=4 and a context window of **2048** tokens on a single A100.
The final validation perplexity of this model is **6.52**.
_Chart of the training loss and perplexity:_

## Other mGPT-1.3B models
- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukranian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)
## Feedback
If you find a bug or have additional data to train the model on your language, please give us feedback.
The model will be improved over time. Stay tuned!
| 3,160 | [
[
-0.045654296875,
-0.037200927734375,
0.0197601318359375,
0.031829833984375,
-0.0309906005859375,
0.01520538330078125,
-0.0205230712890625,
-0.047760009765625,
0.035003662109375,
0.0170440673828125,
-0.054229736328125,
-0.050628662109375,
-0.0467529296875,
0.... |
SaurabhArora/vehicle_defects_2 | 2023-08-16T04:30:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"dataset:chittaranjankhatua/car_damage_pub",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | SaurabhArora | null | null | SaurabhArora/vehicle_defects_2 | 0 | 2 | transformers | 2023-08-10T06:27:12 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vehicle_defects_2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7204301357269287
datasets:
- chittaranjankhatua/car_damage_pub
---
# vehicle_defects_2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
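A hedged inference sketch using the standard 🤗 Transformers image-classification pipeline; the image path below is a hypothetical placeholder:

```python
# Sketch of inference with this classifier via the transformers pipeline.
from transformers import pipeline

MODEL_ID = "SaurabhArora/vehicle_defects_2"

def classify(image_path: str):
    classifier = pipeline("image-classification", model=MODEL_ID)
    # Returns a list of {"label": ..., "score": ...} dicts.
    return classifier(image_path)

# Example ("car_photo.jpg" is a hypothetical local file):
# for pred in classify("car_photo.jpg"):
#     print(f'{pred["label"]}: {pred["score"]:.3f}')
```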
## Example Images
#### broken headlight

#### damaged oil filter

#### damaged windscreen

#### deflated tire

#### worn out tire tread
 | 1,067 | [
[
-0.040740966796875,
-0.036346435546875,
0.040435791015625,
0.032470703125,
-0.035369873046875,
0.0224609375,
0.0271759033203125,
-0.04107666015625,
0.00730133056640625,
0.0192108154296875,
-0.04071044921875,
-0.028106689453125,
-0.032684326171875,
-0.0252532... |
Evan-Lin/Bart-abs-yelp-entailment-20 | 2023-08-11T06:55:33.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | Evan-Lin | null | null | Evan-Lin/Bart-abs-yelp-entailment-20 | 0 | 2 | transformers | 2023-08-10T08:36:52 | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-abs-yelp-entailment-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-abs-yelp-entailment-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-abs-yelp-entailment-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
| 1,261 | [
[
-0.00679779052734375,
-0.059967041015625,
0.0271759033203125,
0.00887298583984375,
-0.0244140625,
0.005558013916015625,
0.0014972686767578125,
-0.021148681640625,
0.00585174560546875,
0.034454345703125,
-0.05633544921875,
-0.03802490234375,
-0.0294647216796875,
... |
bigcode/santacoderpack | 2023-08-16T08:37:40.000Z | [
"transformers",
"pytorch",
"gpt_bigcode",
"fill-mask",
"code",
"text-generation",
"dataset:bigcode/commitpack-subset-cf",
"arxiv:2308.07124",
"license:bigcode-openrail-m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | bigcode | null | null | bigcode/santacoderpack | 5 | 2 | transformers | 2023-08-10T14:35:42 | ---
pipeline_tag: text-generation
inference: true
widget:
- text: '<commit_before>def has_close_elements(numbers: List[float], threshold: float) -> bool:\n for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = elem - elem2\n if distance < threshold:\n return True\n\n return False<commit_message>Fix bugs in has_close_elements.<commit_after>'
example_title: Fix has_close_elements
group: Python
license: bigcode-openrail-m
datasets:
- bigcode/commitpack-subset-cf
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: SantaCoderPack
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix Python
metrics:
- name: pass@1
type: pass@1
value: 3.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix JavaScript
metrics:
- name: pass@1
type: pass@1
value: 4.9
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix Java
metrics:
- name: pass@1
type: pass@1
value: 1.8
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix Go
metrics:
- name: pass@1
type: pass@1
value: 3.6
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix C++
metrics:
- name: pass@1
type: pass@1
value: 4.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix Rust
metrics:
- name: pass@1
type: pass@1
value: 1.7
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix Average
metrics:
- name: pass@1
type: pass@1
value: 3.3
verified: false
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Training](#training)
4. [Citation](#citation)
# Model Summary
SantaCoderPack is a pre-trained model with the same architecture as SantaCoder, trained on [CommitPack](https://huggingface.co/datasets/bigcode/commitpack) using this format:
```
<commit_before>code_before<commit_msg>message<commit_after>code_after
```
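For reference, a small helper (an illustration, not part of the original release) that assembles a prompt in this format:

```python
# Hypothetical helper that builds a prompt in the commit format above.
def commit_prompt(code_before: str, message: str) -> str:
    # The model is expected to complete code_after following <commit_after>.
    return f"<commit_before>{code_before}<commit_msg>{message}<commit_after>"

print(commit_prompt("def add(a, b): return a - b", "Fix add to use +"))
```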
- **Repository:** [bigcode/octopack](https://github.com/bigcode-project/octopack)
- **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124)
- **Languages:** Python, JavaScript, Java, C++, Go, Rust
- **SantaCoderPack:**
<table>
<tr>
<th>Data</th>
<th><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></th>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th>Model</th>
<th><a href=https://huggingface.co/bigcode/octocoder>SantaCoderPack</a></th>
<td>SantaCoderPack (1.1B parameters) pre-trained on CommitPack</td>
</tr>
<tr>
<th>Evaluation</th>
<th><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack/HumanEvalFix</a></th>
<td>Extension of OpenAI's HumanEval to HumanEvalFix</td>
</tr>
</table>
# Use
## Intended use
The model follows instructions provided in the input. We recommend prefacing your input with "<commit_before>def has_close_elements(numbers: List[float], threshold: float) -> bool:\n for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = elem - elem2\n if distance < threshold:\n return True\n\n return False<commit_message>Fix bugs in has_close_elements.<commit_after>"
**Feel free to share your generations in the Community tab!**
## Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/santacoderpack"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("<commit_before>def has_close_elements(numbers: List[float], threshold: float) -> bool:\n    for idx, elem in enumerate(numbers):\n        for idx2, elem2 in enumerate(numbers):\n            if idx != idx2:\n                distance = elem - elem2\n                if distance < threshold:\n                    return True\n\n    return False<commit_message>Fix bugs in has_close_elements.<commit_after>", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention
- **Steps:** 250k pretraining
- **Pretraining tokens:** 131B
- **Precision:** bfloat16
## Hardware
- **Pretraining:**
- **GPUs:** 32 Tesla A100
- **Training time:** 15 days
## Software
- **Orchestration:** [Megatron-LM/Transformers](https://github.com/bigcode-project/santacoderpack#training)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
# Citation
```bibtex
@article{muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre},
journal={arXiv preprint arXiv:2308.07124},
year={2023}
}
``` | 5,805 | [
[
-0.028167724609375,
-0.033203125,
0.021881103515625,
0.0120391845703125,
-0.0099029541015625,
-0.0250091552734375,
-0.029632568359375,
-0.03460693359375,
0.0006756782531738281,
0.0171051025390625,
-0.03936767578125,
-0.0438232421875,
-0.040557861328125,
0.00... |
TheBloke/orca_mini_v3_13B-GGML | 2023-09-27T13:01:15.000Z | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:psmathur/orca_mini_v1_dataset",
"dataset:ehartford/dolphin",
"arxiv:2306.02707",
"license:llama2",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/orca_mini_v3_13B-GGML | 12 | 2 | transformers | 2023-08-10T15:50:29 | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
model_name: Orca Mini v3 13B
inference: false
model_creator: Pankaj Mathur
model_link: https://huggingface.co/psmathur/orca_mini_v3_13b
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
base_model: psmathur/orca_mini_v3_13b
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Orca Mini v3 13B - GGML
- Model creator: [Pankaj Mathur](https://huggingface.co/psmathur)
- Original model: [Orca Mini v3 13B](https://huggingface.co/psmathur/orca_mini_v3_13b)
## Description
This repo contains GGML format model files for [Pankaj Mathur's Orca Mini v3 13B](https://huggingface.co/psmathur/orca_mini_v3_13b).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/orca_mini_v3_13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML)
* [Pankaj Mathur's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v3_13b)
## Prompt template: orca_mini
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
{prompt}
### Input:
{input}
### Response:
```
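For reference, a small Python helper (an illustration, not part of the release) that fills this template in, dropping the `### Input:` section when no context is supplied:

```python
# Illustrative helper for the orca_mini prompt template above.
SYSTEM = ("You are an AI assistant that follows instruction extremely well. "
          "Help as much as you can.")

def make_prompt(user: str, input_text: str = "") -> str:
    parts = [f"### System:\n{SYSTEM}\n", f"### User:\n{user}\n"]
    if input_text:
        parts.append(f"### Input:\n{input_text}\n")
    parts.append("### Response:\n")
    return "\n".join(parts)

print(make_prompt("Write a story about llamas",
                  "In which the llamas have a lovely time at the beach"))
```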
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [orca_mini_v3_13b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q2_K.bin) | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [orca_mini_v3_13b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [orca_mini_v3_13b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [orca_mini_v3_13b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [orca_mini_v3_13b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.37 GB| 9.87 GB | Original quant method, 4-bit. |
| [orca_mini_v3_13b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [orca_mini_v3_13b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [orca_mini_v3_13b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.17 GB| 10.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [orca_mini_v3_13b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.97 GB| 11.47 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [orca_mini_v3_13b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [orca_mini_v3_13b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [orca_mini_v3_13b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.78 GB| 12.28 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [orca_mini_v3_13b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q6_K.bin) | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [orca_mini_v3_13b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML/blob/main/orca_mini_v3_13b.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.79 GB| 16.29 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m orca_mini_v3_13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n### User:\nWrite a story about llamas\n\n### Input:\nIn which the llamas have a lovely time at the beach\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini v3 13B
# orca_mini_v3_13b
A Llama2-13b model trained on Orca Style datasets.
<br>

<br>
**P.S. If you're interested to collaborate, please connect with me at www.linkedin.com/in/pankajam.**
<br>
### quantized versions
Big thanks to [@TheBloke](https://huggingface.co/TheBloke)
1) https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML
2) https://huggingface.co/TheBloke/orca_mini_v3_13B-GPTQ
<br>
#### license disclaimer:
This model is bound by the license and usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated orca_mini_v3_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|||||
|:------:|:--------:|:-------:|:--------:|
|**Task**|**Metric**|**Value**|**Stderr**|
|*arc_challenge*|acc_norm|0.6314|0.0141|
|*hellaswag*|acc_norm|0.8242|0.0038|
|*mmlu*|acc_norm|0.5637|0.0351|
|*truthfulqa_mc*|mc2|0.5127|0.0157|
|**Total Average**|-|**0.6329877193**||
<br>
## Example Usage
Here is the prompt format
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me about Orcas.
### Assistant:
```
Below shows a code example on how to use this model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_13b")
model = AutoModelForCausalLM.from_pretrained(
"psmathur/orca_mini_v3_13b",
torch_dtype=torch.float16,
load_in_8bit=True,
low_cpu_mem_usage=True,
device_map="auto"
)
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
#generate text steps
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{orca_mini_v3_13b,
author = {Pankaj Mathur},
  title = {orca_mini_v3_13b: An Orca Style Llama2-13b model},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_13b}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
| 18,851 | [
[
-0.040679931640625,
-0.06744384765625,
0.0163421630859375,
0.0035228729248046875,
-0.02862548828125,
-0.014556884765625,
0.00302886962890625,
-0.05291748046875,
0.0288238525390625,
0.0053558349609375,
-0.041595458984375,
-0.042022705078125,
-0.0308380126953125,
... |
suyash2102/model-en-to-fr | 2023-08-15T18:22:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | translation | suyash2102 | null | null | suyash2102/model-en-to-fr | 0 | 2 | transformers | 2023-08-10T17:11:11 | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: model-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Introduction
I have built an interactive Gradio language-translation app that translates English sentences into French. For this, I fine-tuned a pre-trained model from Hugging Face.
Clone this repo to a local directory, switch to that directory in your terminal, and run `python gradio_LT.py` to start the translation user interface.
# model-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
I have used this model to translate English sentences to French.
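If you only need programmatic translation without the Gradio UI, a minimal sketch using the transformers pipeline could look like this (untested here; it assumes this repo's model id and downloads the weights on first use):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
translator = pipeline("translation", model="suyash2102/model-en-to-fr")

result = translator("This model translates English to French.")
print(result[0]["translation_text"])
```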
## Model description
I have used the built-in features of the transformers library to make this model. The model is created with AutoModelForSeq2SeqLM, and I have tokenized the dataset according to the pre-trained model's tokenizer.
## Training and evaluation data
I have used the SacreBLEU metric to evaluate my model, which is commonly used in machine translation. It scores correctness by comparing the word overlap between the predicted output and the reference translation.
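As a toy illustration of the word-overlap idea only (a simplification; the actual SacreBLEU metric uses n-gram precisions with a brevity penalty):

```python
from collections import Counter

def unigram_precision(prediction: str, reference: str) -> float:
    """Fraction of predicted words that also appear in the reference,
    with per-word counts clipped as in BLEU's modified precision."""
    pred_counts = Counter(prediction.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(count, ref_counts[word]) for word, count in pred_counts.items())
    return overlap / max(1, sum(pred_counts.values()))

print(unigram_precision("the cat sat on the mat", "the cat is on the mat"))  # 0.8333...
```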
## Training procedure
I have used the Seq2SeqTrainer class to fine-tune the pre-trained model on the dataset. The specific parameters I used are given below.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Gradio Interface
I have made a separate file, gradio_LT.py. Running this file launches a Gradio user interface through which you can translate sentences. The only prerequisite is that transformers, gradio, and sentencepiece are installed in your environment.
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 2,315 | [
[
-0.0260467529296875,
-0.0460205078125,
0.02044677734375,
0.022247314453125,
-0.0243988037109375,
-0.033935546875,
-0.018157958984375,
-0.0269317626953125,
0.01177215576171875,
0.054168701171875,
-0.031951904296875,
-0.02447509765625,
-0.06683349609375,
0.014... |
weiren119/traditional_chinese_qlora_llama2_13b_merged | 2023-08-11T12:02:07.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | weiren119 | null | null | weiren119/traditional_chinese_qlora_llama2_13b_merged | 0 | 2 | transformers | 2023-08-11T00:37:57 | ---
license: apache-2.0
---
# Traditional Chinese QLoRa Llama2 13b
- Github repo: https://github.com/MIBlue119/traditional_chinese_llama2/
- This is a practice run at fine-tuning Llama2 on a traditional Chinese instruction dataset, starting from the Llama2 chat model.
- Used QLoRA and the translated Alpaca dataset to fine-tune the llama2-7b model on an RTX 3090 (24GB VRAM) in 9 hours.
Thanks for these references:
- NTU NLP Lab's alpaca dataset: [alpaca-tw_en-align.json](./alpaca-tw-en-align.json): [ntunpllab](https://github.com/ntunlplab/traditional-chinese-alpaca) translated the Stanford Alpaca 52k dataset
- [Chinese Llama 2 7B train.py](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b/blob/main/train.py)
- [Load the pretrained model in 4-bit precision and Set training with LoRA according to hf's trl lib](https://github.com/lvwerra/trl/blob/main/examples/scripts/sft_trainer.py): QLoRA finetuning
## Resources
- traditional chinese qlora finetuned Llama2 7b merge model: [weiren119/traditional_chinese_qlora_llama2_merged](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2_merged)
- traditional chinese qlora 7b adapter model: [weiren119/traditional_chinese_qlora_llama2](https://huggingface.co/weiren119/traditional_chinese_qlora_llama2)
## Online Demo
- [Run the qlora finetuned model at colab](https://colab.research.google.com/drive/1OYXvhY-8KjEDaGhOLrJe4omjtFgOWjy1?usp=sharing): May need colab pro or colab pro+
## Which pretrained model was used
- NousResearch: https://huggingface.co/NousResearch/Llama-2-7b-chat-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
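For reference, the same settings could be expressed with transformers' `BitsAndBytesConfig` (a hedged sketch; field names follow the current transformers API and may differ across versions):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 double-quant config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```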
### Framework versions
- PEFT 0.4.0
## Usage
### Installation dependencies
```
$pip install transformers torch peft
```
#### Run the inference
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Use the same tokenizer as the source model
model_id = "weiren119/traditional_chinese_qlora_llama2_13b_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
streamer = TextStreamer(tokenizer)

# Load the fine-tuned model (the adapter is already merged, so the plain
# transformers class is used); you can replace this with your own model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=model_id.endswith("4bit"),
    torch_dtype=torch.float16,
    device_map='auto'
)
system_prompt = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
def get_prompt(message: str, chat_history: list[tuple[str, str]]) -> str:
texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
for user_input, response in chat_history:
texts.append(f'{user_input.strip()} [/INST] {response.strip()} </s><s> [INST] ')
texts.append(f'{message.strip()} [/INST]')
return ''.join(texts)
print ("="*100)
print ("-"*80)
print ("Have a try!")
s = ''
chat_history = []
while True:
s = input("User: ")
if s != '':
prompt = get_prompt(s, chat_history)
print ('Answer:')
tokens = tokenizer(prompt, return_tensors='pt').input_ids
#generate_ids = model.generate(tokens.cuda(), max_new_tokens=4096, streamer=streamer)
generate_ids = model.generate(input_ids=tokens.cuda(), max_new_tokens=4096, streamer=streamer)
output = tokenizer.decode(generate_ids[0, len(tokens[0]):-1]).strip()
chat_history.append([s, output])
print ('-'*80)
``` | 4,068 | [
[
-0.016265869140625,
-0.065185546875,
0.01316070556640625,
0.023712158203125,
-0.0235137939453125,
-0.001659393310546875,
-0.01282501220703125,
-0.030853271484375,
0.003253936767578125,
0.01543426513671875,
-0.046417236328125,
-0.0291748046875,
-0.021865844726562... |
uraskargi/ppo-Pyramids | 2023-08-11T01:20:55.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | uraskargi | null | null | uraskargi/ppo-Pyramids | 0 | 2 | ml-agents | 2023-08-11T01:19:09 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: uraskargi/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,334 | [
[
-0.04022216796875,
-0.034515380859375,
0.0017795562744140625,
0.01436614990234375,
-0.01119232177734375,
0.012054443359375,
0.0167083740234375,
-0.014495849609375,
0.033203125,
0.03033447265625,
-0.04046630859375,
-0.050537109375,
-0.029083251953125,
-0.0148... |
smjain/abap-nous-hermes | 2023-08-11T04:23:09.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:smjain/abap",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | smjain | null | null | smjain/abap-nous-hermes | 0 | 2 | transformers | 2023-08-11T01:38:14 | ---
license: apache-2.0
datasets:
- smjain/abap
language:
- en
---
This model is fine-tuned on a very small ABAP dataset. I have used NousResearch/Llama-2-7b-chat-hf as the base model.
Sample code:
```
from transformers import pipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "smjain/abap-nous-hermes"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained('NousResearch/llama-2-7b-chat-hf')

prompt = "Write a sample ABAP report"  # change to your desired prompt
gen = pipeline('text-generation', model=model, tokenizer=tokenizer, max_new_tokens=256)
result = gen(prompt)
print(result[0]['generated_text'])
```
| 688 | [
[
-0.01654052734375,
-0.03228759765625,
0.01947021484375,
0.014129638671875,
-0.01568603515625,
-0.018798828125,
0.00928497314453125,
-0.0152130126953125,
0.01062774658203125,
0.05206298828125,
-0.06109619140625,
-0.033599853515625,
-0.037689208984375,
0.03182... |
Carmesix/Sentiment_Analysis_12000sample | 2023-08-11T04:21:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Carmesix | null | null | Carmesix/Sentiment_Analysis_12000sample | 0 | 2 | transformers | 2023-08-11T03:05:09 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: Sentiment_Analysis_12000sample
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.917
- name: F1
type: f1
value: 0.9187071498530852
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment_Analysis_12000sample
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2455
- Accuracy: 0.917
- F1: 0.9187
## Model description
This model is intended to perform sentiment analysis on comments. Made by Alan Perez.
## Intended uses & limitations
The model has had limited testing and is intended for showcase purposes only.
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,612 | [
[
-0.045074462890625,
-0.0450439453125,
0.01334381103515625,
0.0181884765625,
-0.0391845703125,
-0.00421142578125,
-0.00922393798828125,
-0.0010805130004882812,
0.025115966796875,
0.0228271484375,
-0.059967041015625,
-0.046875,
-0.063720703125,
0.0006365776062... |
unionai/Llama-2-13b-LoRA-alpaca-cleaned | 2023-11-06T08:52:58.000Z | [
"peft",
"region:us"
] | null | unionai | null | null | unionai/Llama-2-13b-LoRA-alpaca-cleaned | 0 | 2 | peft | 2023-08-11T06:10:19 | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: True
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
| 861 | [
[
-0.05059814453125,
-0.053619384765625,
0.0235595703125,
0.03619384765625,
-0.041656494140625,
0.006572723388671875,
0.007068634033203125,
-0.0235595703125,
-0.01059722900390625,
0.027008056640625,
-0.04925537109375,
-0.017608642578125,
-0.036773681640625,
0.... |
muhammadravi251001/fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large | 2023-08-14T04:47:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | muhammadravi251001 | null | null | muhammadravi251001/fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large | 0 | 2 | transformers | 2023-08-11T06:40:57 | ---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4428
- Accuracy: 0.8439
- F1: 0.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4595 | 0.5 | 3654 | 0.4630 | 0.8064 | 0.8089 |
| 0.4138 | 1.0 | 7308 | 0.4497 | 0.8146 | 0.8165 |
| 0.3748 | 1.5 | 10962 | 0.4280 | 0.8420 | 0.8422 |
| 0.3687 | 2.0 | 14616 | 0.4161 | 0.8363 | 0.8376 |
| 0.3265 | 2.5 | 18270 | 0.4209 | 0.8459 | 0.8465 |
| 0.3392 | 3.0 | 21924 | 0.4107 | 0.8459 | 0.8453 |
| 0.2928 | 3.5 | 25578 | 0.4479 | 0.8395 | 0.8401 |
| 0.2975 | 4.0 | 29232 | 0.4428 | 0.8439 | 0.8445 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.14.4
- Tokenizers 0.13.3
| 2,026 | [
[
-0.0355224609375,
-0.04168701171875,
0.0240325927734375,
-0.00440216064453125,
-0.024383544921875,
-0.016754150390625,
-0.020904541015625,
-0.01428985595703125,
0.01190948486328125,
0.03509521484375,
-0.0550537109375,
-0.0516357421875,
-0.047882080078125,
-0... |
TheBloke/Platypus2-70B-GGML | 2023-09-27T13:01:18.000Z | [
"transformers",
"llama",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.07317",
"arxiv:2307.09288",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/Platypus2-70B-GGML | 9 | 2 | transformers | 2023-08-11T06:48:18 | ---
language:
- en
license: llama2
datasets:
- garage-bAInd/Open-Platypus
model_name: Platypus2 70B
inference: false
model_creator: garage-bAInd
model_link: https://huggingface.co/garage-bAInd/Platypus2-70B
model_type: llama
quantized_by: TheBloke
base_model: garage-bAInd/Platypus2-70B
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Platypus2 70B - GGML
- Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd)
- Original model: [Platypus2 70B](https://huggingface.co/garage-bAInd/Platypus2-70B)
## Description
This repo contains GGML format model files for [garage-bAInd's Platypus2 70B](https://huggingface.co/garage-bAInd/Platypus2-70B).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
* [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), version 0.1.77 and later. A Python library with LangChain support, and OpenAI-compatible API server.
* [ctransformers](https://github.com/marella/ctransformers), version 0.2.15 and later. A Python library with LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Platypus2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Platypus2-70B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Platypus2-70B-GGML)
* [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Platypus2-70B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
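Programmatically, the template above can be filled in like this (a small sketch; any of the clients listed below need the same string):

```python
# The Alpaca prompt template used by this model
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(prompt="Summarise the GGML quantisation methods.")
print(prompt)
```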
<!-- compatibility_ggml start -->
## Compatibility
### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023
Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).
For compatibility with latest llama.cpp, please use GGUF files instead.
Or one of the other tools and libraries listed above.
To use in llama.cpp, you must add `-gqa 8` argument.
For other UIs and libraries, please check the docs.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
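As a sanity check on the quoted bits-per-weight figures, here is the arithmetic for GGML_TYPE_Q2_K (assuming one fp16 scale per super-block, which matches the quoted 2.5625 bpw; the exact layout is defined in the llama.cpp source):

```python
# Q2_K super-block: 16 blocks x 16 weights = 256 weights
weights_per_superblock = 16 * 16
quant_bits = weights_per_superblock * 2   # 2-bit quantised weights
scale_min_bits = 16 * (4 + 4)             # 4-bit scale + 4-bit min per block
super_scale_bits = 16                     # one fp16 scale per super-block (assumed)
bpw = (quant_bits + scale_min_bits + super_scale_bits) / weights_per_superblock
print(bpw)  # 2.5625
```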
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [platypus2-70b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB| 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [platypus2-70b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB| 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [platypus2-70b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB| 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [platypus2-70b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB| 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [platypus2-70b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB| 41.37 GB | Original quant method, 4-bit. |
| [platypus2-70b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB| 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [platypus2-70b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB| 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [platypus2-70b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB| 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [platypus2-70b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB| 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [platypus2-70b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB| 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [platypus2-70b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Platypus2-70B-GGML/blob/main/platypus2-70b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 40 -gqa 8 -m platypus2-70b.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to GPU, use `-t 1`
Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM - if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
Remember the `-gqa 8` argument, required for Llama 70B models.
Change `-c 4096` to the desired sequence length for this model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: garage-bAInd's Platypus2 70B
# Platypus2-70B
Platypus2-70B is an instruction fine-tuned model based on the LLaMA2-70B transformer architecture.

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 70.48 |
| ARC (25-shot) | 71.84 |
| HellaSwag (10-shot) | 87.94 |
| TruthfulQA (0-shot) | 62.26 |
| Avg. | 73.13 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-70B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
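In code, the template can be applied with a small helper like the following (a sketch: the card only specifies the header order, so the exact blank-line spacing around the headers is an assumption):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca-style template shown above."""
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Explain LoRA fine-tuning in one sentence.")
print(prompt)
```

The resulting string is what you would pass to the tokenizer before generation.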
### Training Dataset
`garage-bAInd/Platypus2-70B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-70B` was instruction fine-tuned using LoRA on 8 A100 80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  journal={arXiv preprint arXiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
  year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
| 18,353 |
anth0nyhak1m/demo_model | 2023-08-11T08:34:52.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | anth0nyhak1m | null | null | anth0nyhak1m/demo_model | 0 | 2 | transformers | 2023-08-11T08:33:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: demo_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3071
- Accuracy: 0.9556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2576 | 1.0 | 4298 | 0.2377 | 0.9363 |
| 0.1865 | 2.0 | 8596 | 0.2192 | 0.9463 |
| 0.1306 | 3.0 | 12894 | 0.2071 | 0.9525 |
| 0.0954 | 4.0 | 17192 | 0.2278 | 0.9522 |
| 0.0734 | 5.0 | 21490 | 0.2453 | 0.9534 |
| 0.0568 | 6.0 | 25788 | 0.2612 | 0.9541 |
| 0.0427 | 7.0 | 30086 | 0.2736 | 0.9567 |
| 0.0332 | 8.0 | 34384 | 0.2861 | 0.9559 |
| 0.0296 | 9.0 | 38682 | 0.3014 | 0.9552 |
| 0.0198 | 10.0 | 42980 | 0.3071 | 0.9556 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,895 |
Jingya/tiny-stable-diffusion-torch | 2023-08-11T10:32:59.000Z | [
"diffusers",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | Jingya | null | null | Jingya/tiny-stable-diffusion-torch | 0 | 2 | diffusers | 2023-08-11T10:30:16 | ---
license: apache-2.0
duplicated_from: hf-internal-testing/tiny-stable-diffusion-torch
---
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("Jingya/tiny-stable-diffusion-torch")
```
| 241 |
caiAtSNU/PyramidsRND | 2023-08-11T12:53:24.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | caiAtSNU | null | null | caiAtSNU/PyramidsRND | 0 | 2 | ml-agents | 2023-08-11T12:53:20 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: caiAtSNU/PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,332 |
yogjoshi14/ppo-Pyramids_Training | 2023-08-11T13:29:03.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | yogjoshi14 | null | null | yogjoshi14/ppo-Pyramids_Training | 0 | 2 | ml-agents | 2023-08-11T13:29:01 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: yogjoshi14/ppo-Pyramids_Training
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,344 |
dhinman/ppo-PyramidsTraining | 2023-08-11T14:22:56.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | dhinman | null | null | dhinman/ppo-PyramidsTraining | 0 | 2 | ml-agents | 2023-08-11T14:22:52 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dhinman/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,340 |
Vegeballoon/videomae-base-ssv2-finetuned-mixed-v2-label-smoothing-focalloss0.7_5-less-strict | 2023-08-12T23:49:23.000Z | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | Vegeballoon | null | null | Vegeballoon/videomae-base-ssv2-finetuned-mixed-v2-label-smoothing-focalloss0.7_5-less-strict | 0 | 2 | transformers | 2023-08-11T17:20:41 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-ssv2-finetuned-mixed-v2-label-smoothing-focalloss0.7_5-less-strict
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-ssv2-finetuned-mixed-v2-label-smoothing-focalloss0.7_5-less-strict
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-ssv2](https://huggingface.co/MCG-NJU/videomae-base-finetuned-ssv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0029
- Accuracy: 0.8743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 18140
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0008 | 0.05 | 908 | 0.0010 | 0.8276 |
| 0.0007 | 1.05 | 1816 | 0.0011 | 0.8223 |
| 0.0005 | 2.05 | 2724 | 0.0010 | 0.8725 |
| 0.0007 | 3.05 | 3632 | 0.0017 | 0.7469 |
| 0.0005 | 4.05 | 4540 | 0.0010 | 0.8366 |
| 0.0008 | 5.05 | 5448 | 0.0009 | 0.8492 |
| 0.0003 | 6.05 | 6356 | 0.0010 | 0.8654 |
| 0.0004 | 7.05 | 7264 | 0.0014 | 0.8600 |
| 0.0007 | 8.05 | 8172 | 0.0014 | 0.8438 |
| 0.0003 | 9.05 | 9080 | 0.0014 | 0.8671 |
| 0.0004 | 10.05 | 9988 | 0.0028 | 0.8321 |
| 0.0002 | 11.05 | 10896 | 0.0018 | 0.8752 |
| 0.0002 | 12.05 | 11804 | 0.0019 | 0.8645 |
| 0.0003 | 13.05 | 12712 | 0.0014 | 0.8752 |
| 0.0001 | 14.05 | 13620 | 0.0021 | 0.8609 |
| 0.0003 | 15.05 | 14528 | 0.0021 | 0.8627 |
| 0.0002 | 16.05 | 15436 | 0.0021 | 0.8779 |
| 0.0001 | 17.05 | 16344 | 0.0027 | 0.8797 |
| 0.0001 | 18.05 | 17252 | 0.0029 | 0.8725 |
| 0.0001 | 19.05 | 18140 | 0.0029 | 0.8743 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
| 2,768 |
stoyky/ppo-Pyramids | 2023-08-11T18:45:36.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | stoyky | null | null | stoyky/ppo-Pyramids | 0 | 2 | ml-agents | 2023-08-11T18:44:57 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: stoyky/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,331 |
Bastian1111/ppo-PyramidsRND | 2023-08-11T20:49:47.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | Bastian1111 | null | null | Bastian1111/ppo-PyramidsRND | 0 | 2 | ml-agents | 2023-08-11T20:49:43 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Bastian1111/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,339 |
whywynn/ppo-PyramidsTraining | 2023-08-11T21:28:30.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | whywynn | null | null | whywynn/ppo-PyramidsTraining | 0 | 2 | ml-agents | 2023-08-11T21:28:27 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: whywynn/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 1,340 |
BauyrjanQ/whisper-kk-speech2ner-b16-ms8k-ss2K-8ep-s-ksc_t | 2023-08-12T20:25:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | BauyrjanQ | null | null | BauyrjanQ/whisper-kk-speech2ner-b16-ms8k-ss2K-8ep-s-ksc_t | 0 | 2 | transformers | 2023-08-11T23:04:54 | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk-speech2ner-b16-ms4k-ss2K-8ep-wlv2-ksc_t
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk-speech2ner-b16-ms4k-ss2K-8ep-wlv2-ksc_t
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0557
- Wer: 18.6267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0218 | 0.22 | 2000 | 0.0644 | 20.8004 |
| 0.0176 | 0.43 | 4000 | 0.0633 | 17.3922 |
| 0.0147 | 0.65 | 6000 | 0.0576 | 18.1864 |
| 0.0117 | 0.87 | 8000 | 0.0557 | 18.6267 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
| 1,549 |
aeolian83/poly-ko-1.3b-translate | 2023-08-13T13:29:17.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"ko",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | aeolian83 | null | null | aeolian83/poly-ko-1.3b-translate | 2 | 2 | transformers | 2023-08-12T06:10:33 | ---
license: apache-2.0
language:
- ko
datasets:
- squarelike/sharegpt_deepl_ko_translation
tags:
- pytorch
- causal-lm
---
# poly-ko-1.3b-translate
- A model fine-tuned from EleutherAI/polyglot-ko-1.3b on squarelike/sharegpt_deepl_ko_translation so that it performs English-to-Korean translation only
- Fine-tuned with the QLoRA technique
### Training details
- Epoch: 1
- learning-rate: 3e-4
- batch_size: 3
- Lora r: 8
- Lora target modules: query_key_value
Trained on a single RTX 3090 GPU. | 410 |
rizquuula/mBERT-IndoSQuADv2_1691852742-16-2e-06-0.01-5 | 2023-08-12T22:44:06.000Z | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | rizquuula | null | null | rizquuula/mBERT-IndoSQuADv2_1691852742-16-2e-06-0.01-5 | 0 | 2 | transformers | 2023-08-12T15:08:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mBERT-IndoSQuADv2_1691852742-16-2e-06-0.01-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT-IndoSQuADv2_1691852742-16-2e-06-0.01-5
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2163 | 1.0 | 8145 | 2.0167 |
| 1.7866 | 2.0 | 16290 | 1.9174 |
| 1.6696 | 3.0 | 24435 | 1.8724 |
| 1.6033 | 4.0 | 32580 | 1.8688 |
| 1.5639 | 5.0 | 40725 | 1.8692 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,536 |
rizquuula/mBERT-IndoSQuADv2_1691852823-8-2e-06-0.01-5 | 2023-08-12T23:14:05.000Z | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | rizquuula | null | null | rizquuula/mBERT-IndoSQuADv2_1691852823-8-2e-06-0.01-5 | 0 | 2 | transformers | 2023-08-12T15:09:52 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mBERT-IndoSQuADv2_1691852823-8-2e-06-0.01-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT-IndoSQuADv2_1691852823-8-2e-06-0.01-5
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1134 | 1.0 | 16290 | 1.9777 |
| 1.7153 | 2.0 | 32580 | 1.8824 |
| 1.5898 | 3.0 | 48870 | 1.8629 |
| 1.5116 | 4.0 | 65160 | 1.8628 |
| 1.4696 | 5.0 | 81450 | 1.8685 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,532 |
ReporterMarina/autotrain-clean-broker-fee-81843141864 | 2023-08-12T16:08:02.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:ReporterMarina/autotrain-data-clean-broker-fee",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | ReporterMarina | null | null | ReporterMarina/autotrain-clean-broker-fee-81843141864 | 0 | 2 | transformers | 2023-08-12T16:07:28 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- ReporterMarina/autotrain-data-clean-broker-fee
co2_eq_emissions:
emissions: 0.1612062835596046
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 81843141864
- CO2 Emissions (in grams): 0.1612
## Validation Metrics
- Loss: 1.043
- Accuracy: 0.500
- Macro F1: 0.222
- Micro F1: 0.500
- Weighted F1: 0.333
- Macro Precision: 0.167
- Micro Precision: 0.500
- Weighted Precision: 0.250
- Macro Recall: 0.333
- Micro Recall: 0.500
- Weighted Recall: 0.500
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ReporterMarina/autotrain-clean-broker-fee-81843141864
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ReporterMarina/autotrain-clean-broker-fee-81843141864", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ReporterMarina/autotrain-clean-broker-fee-81843141864", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| 1,334 |
Francesco-A/distilbert-base-uncased-finetuned-imdb-v2 | 2023-08-12T17:21:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"en",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | Francesco-A | null | null | Francesco-A/distilbert-base-uncased-finetuned-imdb-v2 | 0 | 2 | transformers | 2023-08-12T16:44:44 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
language:
- en
metrics:
- perplexity
---
# distilbert-base-uncased-finetuned-imdb-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3033
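The card lists perplexity as its metric but reports cross-entropy loss; for a masked-language-model evaluation the two are related by a simple exponential, so the loss above corresponds to a perplexity of roughly 10:

```python
import math

eval_loss = 2.3033                # evaluation loss reported above
perplexity = math.exp(eval_loss)  # perplexity = exp(cross-entropy loss) ≈ 10.01
```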
## Model description
This model is a fine-tuned version of DistilBERT base uncased on the IMDb dataset. It was trained with masked language modeling, i.e. to predict randomly masked words in a sentence, and has been fine-tuned to adapt to the language patterns and sentiment present in movie reviews.
## Intended uses & limitations
This model is primarily designed for the fill-mask task, a type of language modeling where the model is trained to predict missing words within a given context. It excels at completing sentences or phrases by predicting the most likely missing word based on the surrounding text. This functionality makes it valuable for a wide range of natural language processing tasks, such as generating coherent text, improving auto-completion in writing applications, and enhancing conversational agents' responses. However, it may have limitations in handling domain-specific language or topics not present in the IMDb dataset. Additionally, it may not perform well on languages other than English.
## Training and evaluation data
The model was trained on a subset of the IMDb dataset, containing 40,000 reviews for fine-tuning. The evaluation was conducted on a separate test set of 6,000 reviews.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4912 | 1.0 | 625 | 2.3564 |
| 2.4209 | 2.0 | 1250 | 2.3311 |
| 2.4 | 3.0 | 1875 | 2.3038 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
## How to use
```python
import torch
import pandas as pd
from transformers import AutoTokenizer, AutoModelForMaskedLM
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Francesco-A/distilbert-base-uncased-finetuned-imdb-v2")
model = AutoModelForMaskedLM.from_pretrained("Francesco-A/distilbert-base-uncased-finetuned-imdb-v2")
# Example sentence
sentence = "This movie is really [MASK]."
# Tokenize the sentence
inputs = tokenizer(sentence, return_tensors="pt")
# Get the model's predictions
with torch.no_grad():
outputs = model(**inputs)
# Get the top-k predicted tokens and their probabilities
k = 5 # Number of top predictions to retrieve
masked_token_index = inputs["input_ids"].tolist()[0].index(tokenizer.mask_token_id)
predicted_token_logits = outputs.logits[0, masked_token_index]
topk_values, topk_indices = torch.topk(torch.softmax(predicted_token_logits, dim=-1), k)
# Convert top predicted token indices to words
predicted_tokens = [tokenizer.decode(idx.item()) for idx in topk_indices]
# Convert probabilities to Python floats
probs = topk_values.tolist()
# Create a DataFrame to display the top predicted words and probabilities
data = {
"Predicted Words": predicted_tokens,
"Probability": probs,
}
df = pd.DataFrame(data)
# Display the DataFrame
df
``` | 3,689 | [
[
-0.03216552734375,
-0.05169677734375,
0.00091552734375,
0.0091552734375,
-0.0313720703125,
0.012359619140625,
-0.00377655029296875,
-0.0003170967102050781,
0.0250396728515625,
0.034698486328125,
-0.051177978515625,
-0.035003662109375,
-0.0718994140625,
-0.00... |
Xenova/toxic-bert | 2023-08-12T17:22:07.000Z | [
"transformers.js",
"onnx",
"bert",
"text-classification",
"region:us"
] | text-classification | Xenova | null | null | Xenova/toxic-bert | 0 | 2 | transformers.js | 2023-08-12T17:21:50 | ---
library_name: "transformers.js"
---
https://huggingface.co/unitary/toxic-bert with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 497 | [
[
-0.0255889892578125,
0.0099029541015625,
0.0262908935546875,
0.0504150390625,
-0.00705718994140625,
0.0003237724304199219,
-0.005146026611328125,
-0.0235595703125,
0.019134521484375,
0.0350341796875,
-0.052734375,
-0.035552978515625,
-0.036285400390625,
-0.0... |
zyacub/hugging_face_QA | 2023-08-12T18:18:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | zyacub | null | null | zyacub/hugging_face_QA | 0 | 2 | transformers | 2023-08-12T17:35:12 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: hugging_face_QA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hugging_face_QA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.0354 |
| 0.3317 | 2.0 | 500 | 2.1035 |
| 0.3317 | 3.0 | 750 | 2.0138 |
| 0.313 | 4.0 | 1000 | 2.1020 |
| 0.313 | 5.0 | 1250 | 2.1866 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,521 | [
[
-0.032470703125,
-0.051055908203125,
0.007213592529296875,
0.029937744140625,
-0.022857666015625,
-0.00311279296875,
0.0026721954345703125,
-0.0135650634765625,
0.0095367431640625,
0.0181121826171875,
-0.07244873046875,
-0.0433349609375,
-0.054412841796875,
... |
Trelis/mpt-7b-instruct-hosted-inference-8bit | 2023-08-14T11:45:18.000Z | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"hosted inference",
"8 bit",
"8bit",
"8-bit",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"text-genera... | text-generation | Trelis | null | null | Trelis/mpt-7b-instruct-hosted-inference-8bit | 0 | 2 | transformers | 2023-08-12T17:45:58 | ---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
- hosted inference
- 8 bit
- 8bit
- 8-bit
inference: true
---
# MPT 7B Instruct - hosted inference
This is simply an 8-bit version of the mpt-7b-instruct model.
- 8-bit quantization keeps the model below 10 GB
- This allows for hosted inference of the model on the model's home page
- Note that inference may be slow unless you have a HuggingFace Pro plan.
If you want to run inference yourself (e.g. in a Colab notebook) you can try:
```python
!pip install -q -U git+https://github.com/huggingface/accelerate.git
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
model_id = 'Trelis/mpt-7b-instruct-hosted-inference-8bit'
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline, TextStreamer
config = transformers.AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.init_device = 'cuda:0' # Unclear whether this really helps a lot or interacts with device_map.
config.max_seq_len = 512
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, config=config)
# MPT Inference
def stream(user_instruction):
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
prompt = PROMPT_FOR_GENERATION_FORMAT.format(instruction=user_instruction)
inputs = tokenizer([prompt], return_tensors="pt").to("cuda:0")
streamer = TextStreamer(tokenizer)
# Despite returning the usual output, the streamer will also print the generated text to stdout.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=500, eos_token_id=0, temperature=1)
stream('Count to ten')
```
---
Below follows information on the original mpt-7b-instruct model...
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
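Note the two-stage formatting idiom above: the first `.format(...)` call fills in the static pieces while deliberately re-inserting the literal `{instruction}` placeholder, which a second `.format(instruction=...)` call then fills. A self-contained sketch of the same idiom:

```python
# Pass 1 fills the static keys and keeps "{instruction}" as a live placeholder;
# pass 2 fills in the actual instruction text.
template = "{intro}\n{instruction_key}\n{instruction}\n{response_key}\n".format(
    intro="Below is an instruction that describes a task. Write a response that appropriately completes the request.",
    instruction_key="### Instruction:",
    instruction="{instruction}",  # survives pass 1 unchanged
    response_key="### Response:",
)
prompt = template.format(instruction="Count to ten")
print(prompt)
```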
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
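As a sanity check, the 6.7B figure is consistent with the other rows: roughly 12·d_model² parameters per layer (Q/K/V/output projections plus a 4x-wide MLP) plus the embedding matrix. A rough sketch that ignores LayerNorm weights:

```python
d_model, n_layers, vocab_size = 4096, 32, 50432

per_layer = 12 * d_model**2          # 4*d^2 attention + 8*d^2 MLP (no biases)
embeddings = vocab_size * d_model    # token embedding matrix
total = n_layers * per_layer + embeddings
print(f"~{total / 1e9:.2f}B parameters")  # ~6.65B, matching the 6.7B above
```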
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
| 10,034 | [
[
-0.03363037109375,
-0.04718017578125,
0.0153350830078125,
0.020294189453125,
-0.0272674560546875,
-0.00585174560546875,
0.0010204315185546875,
-0.018524169921875,
0.007049560546875,
0.0246124267578125,
-0.04730224609375,
-0.03228759765625,
-0.047088623046875,
... |
Xenova/m2m100_418M | 2023-09-04T14:40:56.000Z | [
"transformers.js",
"onnx",
"m2m_100",
"text2text-generation",
"translation",
"region:us"
] | translation | Xenova | null | null | Xenova/m2m100_418M | 1 | 2 | transformers.js | 2023-08-12T20:41:11 | ---
library_name: transformers.js
pipeline_tag: translation
---
https://huggingface.co/facebook/m2m100_418M with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 523 | [
[
-0.038360595703125,
0.011016845703125,
0.0236358642578125,
0.05657958984375,
-0.0022983551025390625,
-0.004268646240234375,
0.0012178421020507812,
-0.0139312744140625,
0.0273895263671875,
0.038543701171875,
-0.06744384765625,
-0.03240966796875,
-0.04458618164062... |
degor/ppp-Pyramids | 2023-08-13T01:27:21.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | degor | null | null | degor/ppp-Pyramids | 0 | 2 | ml-agents | 2023-08-13T01:26:19 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: degor/ppp-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| 1,330 | [
[
-0.04132080078125,
-0.03399658203125,
0.0017004013061523438,
0.01474761962890625,
-0.0101470947265625,
0.01251983642578125,
0.0166778564453125,
-0.01485443115234375,
0.03314208984375,
0.029388427734375,
-0.039703369140625,
-0.049774169921875,
-0.0296173095703125... |
Suchinthana/MT-5-Sinhala-Wikigen | 2023-09-06T17:18:56.000Z | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"si",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Suchinthana | null | null | Suchinthana/MT-5-Sinhala-Wikigen | 0 | 2 | transformers | 2023-08-13T09:08:38 | ---
license: apache-2.0
language:
- si
widget:
- text: 'writeWiki: මානව ආහාර'
- text: 'writeWiki: ගෝලීයකරණය'
- text: 'writeWiki: ජංගම දුරකථනය'
- text: 'writeWiki: ඇස්කිමෝවරු'
- text: 'writeWiki: අනුරාධපුරය'
datasets:
- wikipedia
---
### Fine tuned MT5 base model with Sinhala Wikipedia Dataset
This model is fine-tuned on articles from the Sinhala Wikipedia for article generation. Around 10,000 articles were used for training, and the model was fine-tuned more than 100 times.
### How to use
We have to use the **"writeWiki: "** prefix at the beginning of each prompt.
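As a sketch, the prefix can be attached with a small helper (`make_prompt` is a hypothetical name, not part of the model's API):

```python
# Every prompt must carry the "writeWiki: " prefix the model was trained with.
def make_prompt(topic: str) -> str:
    return "writeWiki: " + topic

print(make_prompt("මානව ආහාර"))  # -> writeWiki: මානව ආහාර
```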
You can use this model with a pipeline for text generation.
First you might need to install required libraries and import them.
```py
!pip uninstall transformers -y
!pip install transformers
!pip install tokenizers sentencepiece
```
Then we might need to restart the runtime either manually or use the below code to end it.
```py
import os
os.kill(os.getpid(), 9)
```
Then we just have to import the tokenizer and run the pipeline:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('google/mt5-base')
from transformers import pipeline
generator = pipeline(model='Suchinthana/MT5-Sinhala-Wikigen-Experimental', tokenizer=tokenizer)
generator("writeWiki: මානව ආහාර", do_sample=True, max_length=180)
``` | 1,289 | [
[
-0.0288238525390625,
-0.0408935546875,
0.01275634765625,
-0.0015115737915039062,
-0.038177490234375,
-0.015411376953125,
-0.0212860107421875,
0.00009453296661376953,
0.0040283203125,
0.04449462890625,
-0.07049560546875,
-0.0293426513671875,
-0.03167724609375,
... |
manuu01/xtremedistil-l6-h256-uncased-nli | 2023-08-16T10:09:36.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"dataset:scitail",
"dataset:multi_nli",
"dataset:anli",
"dataset:snli",
"dataset:bias-amplified-splits/wanli",
"endpoints_compatible",
"region:us"
] | text-classification | manuu01 | null | null | manuu01/xtremedistil-l6-h256-uncased-nli | 0 | 2 | transformers | 2023-08-13T21:37:40 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xtremedistil-l6-h256-uncased-nli
results: []
datasets:
- scitail
- multi_nli
- anli
- snli
- bias-amplified-splits/wanli
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-nli
The model base is [xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
It has been fine-tuned on: [snli](https://huggingface.co/datasets/snli), [wanli](https://huggingface.co/datasets/alisawuffles/WANLI),
[mnli](https://huggingface.co/datasets/multi_nli), [anli](https://huggingface.co/datasets/anli),
[scitail](https://huggingface.co/datasets/scitail)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Training results
It achieved the following accuracy during training (on validation sets):
- SNLI: 87.90%
- MNLI: 82.27%
- ANLI_r3: 44.83%
- scitail: 91.02%
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3 | 1,290 | [
[
-0.0172882080078125,
-0.0183563232421875,
0.0199432373046875,
0.00711822509765625,
-0.0207366943359375,
-0.016387939453125,
0.005466461181640625,
-0.032318115234375,
0.0281982421875,
0.027008056640625,
-0.051971435546875,
-0.048583984375,
-0.059967041015625,
... |
Xenova/ClinicalBERT | 2023-09-01T21:39:15.000Z | [
"transformers.js",
"onnx",
"distilbert",
"fill-mask",
"feature-extraction",
"region:us"
] | feature-extraction | Xenova | null | null | Xenova/ClinicalBERT | 0 | 2 | transformers.js | 2023-08-13T22:08:44 | ---
library_name: transformers.js
pipeline_tag: feature-extraction
---
https://huggingface.co/medicalai/ClinicalBERT with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 532 | [
[
-0.0168609619140625,
0.0205078125,
0.036956787109375,
0.04351806640625,
-0.00354766845703125,
-0.0006527900695800781,
-0.0017805099487304688,
-0.0125579833984375,
0.0352783203125,
0.038543701171875,
-0.046051025390625,
-0.054290771484375,
-0.04656982421875,
... |
steve-tong/opus-mt-en-zh-hk | 2023-08-14T02:35:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | steve-tong | null | null | steve-tong/opus-mt-en-zh-hk | 0 | 2 | transformers | 2023-08-14T02:23:13 | ---
license: apache-2.0
base_model: steve-tong/opus-mt-en-zh-tw
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-zh-hk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-hk
This model is a fine-tuned version of [steve-tong/opus-mt-en-zh-tw](https://huggingface.co/steve-tong/opus-mt-en-zh-tw) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7483
- Bleu: 2.0939
- Gen Len: 8.8344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Bleu | Gen Len | Validation Loss |
|:-------------:|:-----:|:------:|:------:|:-------:|:---------------:|
| 6.1985 | 1.0 | 3204 | 0.0368 | 15.7821 | 5.5151 |
| 5.1515 | 2.0 | 6408 | 0.0795 | 19.0206 | 4.8442 |
| 4.4578 | 3.0 | 9612 | 0.1236 | 15.8192 | 4.5900 |
| 4.0205 | 4.0 | 12816 | 0.2263 | 11.7562 | 4.3855 |
| 3.6807 | 5.0 | 16020 | 0.3763 | 10.0861 | 4.2938 |
| 3.3622 | 6.0 | 19224 | 0.8981 | 9.1685 | 4.2150 |
| 3.1207 | 7.0 | 22428 | 0.9003 | 8.7014 | 4.3173 |
| 2.8693 | 8.0 | 25632 | 1.2798 | 8.6273 | 4.2797 |
| 2.7172 | 9.0 | 28836 | 1.3176 | 8.4922 | 4.2541 |
| 2.5925 | 10.0 | 32040 | 1.2774 | 8.6812 | 4.2033 |
| 2.4255 | 11.0 | 35244 | 1.3112 | 8.5317 | 4.3955 |
| 2.3242 | 12.0 | 38448 | 1.4831 | 8.7599 | 4.4269 |
| 2.1889 | 13.0 | 41652 | 1.5538 | 8.6474 | 4.3731 |
| 2.0876 | 14.0 | 44856 | 1.45 | 8.5721 | 4.4453 |
| 2.0078 | 15.0 | 48060 | 1.4117 | 8.6339 | 4.5300 |
| 1.9271 | 16.0 | 51264 | 1.546 | 8.7039 | 4.5676 |
| 1.8291 | 17.0 | 54468 | 1.406 | 8.6009 | 4.6800 |
| 1.7886 | 18.0 | 57672 | 1.2522 | 8.549 | 4.6512 |
| 1.6894 | 19.0 | 60876 | 1.6497 | 8.6231 | 4.8486 |
| 1.6176 | 20.0 | 64080 | 1.5496 | 8.6013 | 4.7852 |
| 1.5721 | 21.0 | 67284 | 1.5994 | 8.7434 | 4.8427 |
| 1.5352 | 22.0 | 70488 | 1.4812 | 8.6895 | 4.8117 |
| 1.4536 | 23.0 | 73692 | 1.527 | 8.7088 | 4.9496 |
| 1.3996 | 24.0 | 76896 | 1.596 | 8.7047 | 5.0385 |
| 1.3619 | 25.0 | 80100 | 1.4476 | 8.9811 | 5.0234 |
| 1.3395 | 26.0 | 83304 | 1.4646 | 8.7657 | 5.0767 |
| 1.2822 | 27.0 | 86508 | 1.3204 | 8.8608 | 5.1034 |
| 1.254 | 28.0 | 89712 | 1.8617 | 8.9263 | 5.1776 |
| 1.1714 | 29.0 | 92916 | 1.3892 | 8.7879 | 5.1935 |
| 1.1895 | 30.0 | 96120 | 1.4488 | 8.7516 | 5.2259 |
| 1.1355 | 31.0 | 99324 | 1.4837 | 8.6726 | 5.3575 |
| 1.114 | 32.0 | 102528 | 1.4092 | 8.6701 | 5.3746 |
| 1.0678 | 33.0 | 105732 | 1.6906 | 8.79 | 5.3924 |
| 1.0689 | 34.0 | 108936 | 1.7832 | 8.8237 | 5.4634 |
| 1.0323 | 35.0 | 112140 | 2.0318 | 8.8081 | 5.4653 |
| 0.9952 | 36.0 | 115344 | 1.9861 | 8.832 | 5.5036 |
| 0.9845 | 37.0 | 118548 | 1.6519 | 8.7566 | 5.5411 |
| 0.9545 | 38.0 | 121752 | 1.6037 | 8.8245 | 5.5439 |
| 0.9143 | 39.0 | 124956 | 2.0811 | 8.8068 | 5.6464 |
| 0.9264 | 40.0 | 128160 | 1.7974 | 9.0354 | 5.6386 |
| 0.8856 | 41.0 | 131364 | 2.0425 | 8.8093 | 5.6490 |
| 0.8818 | 42.0 | 134568 | 2.1628 | 8.7829 | 5.6748 |
| 0.8592 | 43.0 | 137772 | 2.0719 | 8.825 | 5.6744 |
| 0.8536 | 44.0 | 140976 | 1.6899 | 8.8377 | 5.6870 |
| 0.8428 | 45.0 | 144180 | 2.128 | 8.8241 | 5.7233 |
| 0.8315 | 46.0 | 147384 | 2.0585 | 8.8151 | 5.7139 |
| 0.8185 | 47.0 | 150588 | 2.0572 | 8.8299 | 5.7853 |
| 0.8142 | 48.0 | 153792 | 2.0756 | 8.8427 | 5.7462 |
| 0.7832 | 49.0 | 156996 | 2.1042 | 8.8381 | 5.7406 |
| 0.7934 | 50.0 | 160200 | 2.0939 | 8.8344 | 5.7483 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
| 4,953 | [
[
-0.05145263671875,
-0.031494140625,
0.0197906494140625,
0.00814056396484375,
-0.006443023681640625,
-0.006618499755859375,
0.004428863525390625,
-0.0031681060791015625,
0.049407958984375,
0.028411865234375,
-0.042694091796875,
-0.0452880859375,
-0.04364013671875... |
potatomine/keras-dummy-sequential-demo-test | 2023-08-14T04:52:51.000Z | [
"keras",
"region:us"
] | null | potatomine | null | null | potatomine/keras-dummy-sequential-demo-test | 0 | 2 | keras | 2023-08-14T04:46:38 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
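For reference, the table corresponds to the fields of the Keras `Adam` optimizer config; the sketch below (plain Python, no TensorFlow required) also shows that the oddly precise learning rate is simply 1e-3 after float32 rounding:

```python
# The optimizer table above as a plain config dict (field names follow
# the Keras Adam get_config() serialization).
adam_config = {
    "name": "Adam",
    "learning_rate": 0.0010000000474974513,  # 1e-3 stored as float32
    "beta_1": 0.9,
    "beta_2": 0.999,
    "epsilon": 1e-07,
    "amsgrad": False,
}

assert abs(adam_config["learning_rate"] - 1e-3) < 1e-8
```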
| 739 | [
[
-0.03778076171875,
-0.04107666015625,
0.028533935546875,
0.00582122802734375,
-0.03399658203125,
-0.016204833984375,
0.0013608932495117188,
-0.00040984153747558594,
0.0233917236328125,
0.021697998046875,
-0.04510498046875,
-0.04913330078125,
-0.034637451171875,
... |
dantepalacio/ruLongT5-Large | 2023-08-16T10:13:07.000Z | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | dantepalacio | null | null | dantepalacio/ruLongT5-Large | 0 | 2 | transformers | 2023-08-14T08:56:57 | ---
language:
- ru
---
Original model: agemagician/mlong-t5-tglobal-large

Adaptation guide: https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90

```python
import torch
from transformers import MT5Tokenizer, LongT5ForConditionalGeneration

model_name = "dantepalacio/ruLongT5-Large"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = LongT5ForConditionalGeneration.from_pretrained(model_name, ignore_mismatched_sizes=True)
```
| 475 | [
[
-0.0101165771484375,
-0.006420135498046875,
0.01873779296875,
0.030029296875,
-0.027069091796875,
-0.0113372802734375,
-0.0305328369140625,
-0.024658203125,
-0.00614166259765625,
0.041839599609375,
-0.06365966796875,
-0.035552978515625,
-0.037139892578125,
0... |
ajibawa-2023/carl-llama-2-13b | 2023-08-16T20:16:44.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:jerryjalapeno/nart-100k-synthetic",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | ajibawa-2023 | null | null | ajibawa-2023/carl-llama-2-13b | 10 | 2 | transformers | 2023-08-14T10:07:25 | ---
license: cc-by-nc-nd-4.0
datasets:
- jerryjalapeno/nart-100k-synthetic
language:
- en
---
**Carl: A Therapist AI**
Early prevention can help a lot of people avoid depression and other mental illnesses. Therapy is a controversial use case because the outputs and capabilities of LLMs are uncertain.
Many people don't have access to a therapist due to financial, personal, social or other restrictions.
Here comes Carl: a Therapist AI which can respond to you quickly. It is trained on more than 100,000 sets of conversations, each set containing 10-15 exchanges between Carl and a client.
The base data was obtained from jerryjalapeno/nart-100k-synthetic and was further refined for fine-tuning. The entire dataset is synthetic; synthetic data was used because there is little to no therapy conversation data that is publicly available and directly applicable to an LLM.
This is by no means a replacement for a doctor or professional therapist. If you are stressed or going through a tough time, please seek professional help or talk to a friend or family member.
**Training:**
The model was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 50 hours. The DeepSpeed codebase was used for training. The base model is Meta's Llama-2.
GGML quantized models were converted by Kijana Mitchell. I am extremely thankful to him.
**GPTQ**
GPTQ: [TheBloke](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GPTQ)
Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for guiding me and making this model available.
**Example Prompt:**
```
This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down
Context
You are Carl, A Therapist AI
USER: <prompt>
CARL:
```
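The template above can be assembled programmatically before being passed to the model. A minimal sketch — the helper name and the sample message are illustrative, not part of the model release; the system text is copied verbatim from the template above:

```python
# Illustrative helper that wraps a user message in Carl's prompt format.
# The SYSTEM text is copied from the template above; the function name
# and defaults are our own.
SYSTEM = (
    "This is a conversation with your Therapist AI, Carl. Carl is designed "
    "to help you while in stress. It can answer your questions and help you to calm down\n"
    "Context\n"
    "You are Carl, A Therapist AI\n"
)

def build_prompt(user_message: str) -> str:
    """Return the full prompt in the USER:/CARL: turn format shown above."""
    return f"{SYSTEM}USER: {user_message}\nCARL:"

print(build_prompt("I have been feeling anxious lately."))
```

The resulting string can be fed to any standard text-generation pipeline; generation should be stopped at the next `USER:` marker to keep a single turn.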
Note:
This is just a research experiment, and the model should NOT be used as a human therapist. Use the `cat` command to join all `pytorch_model.bin` parts. | 1,920 | [
[
-0.0247955322265625,
-0.071044921875,
0.039947509765625,
0.033416748046875,
-0.0212554931640625,
-0.00247955322265625,
-0.0018186569213867188,
-0.053070068359375,
0.004085540771484375,
0.01434326171875,
-0.03057861328125,
-0.0243682861328125,
-0.051361083984375,... |
TheBloke/llama2-22B-daydreamer-v2-GGML | 2023-09-27T13:01:26.000Z | [
"transformers",
"llama",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/llama2-22B-daydreamer-v2-GGML | 4 | 2 | transformers | 2023-08-14T12:10:07 | ---
license: llama2
model_name: Llama2 22B Daydreamer v2
inference: false
model_creator: Nick Perez
model_link: https://huggingface.co/nkpz/llama2-22b-daydreamer-v2
model_type: llama
quantized_by: TheBloke
base_model: nkpz/llama2-22b-daydreamer-v2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 22B Daydreamer v2 - GGML
- Model creator: [Nick Perez](https://huggingface.co/nkpz)
- Original model: [Llama2 22B Daydreamer v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2)
## Description
This repo contains GGML format model files for [Nick Perez's Llama2 22B Daydreamer v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML)
* [Nick Perez's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2)
## Prompt template: Custom
Q&A Example
```
Question: {prompt}
Answer:
```
An example of how it handles different roles, which I still like to use explicit instructions for:
```
### Instruction
Complete the story in a manner that accurately reflects the scenario summary.
### Scenario:
A hot dog salesman at a baseball game is annoyed and behaving rudely because I don't want to buy a hot dog.
### Begin Chat
Hot Dog Salesman:
```
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
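The bpw figures above can be sanity-checked with simple arithmetic: each super-block covers 256 weights, and the total cost is the quantized weights plus the per-block scale/min bits plus one or two fp16 super-block scales. The layout below is our own reconstruction for illustration, not llama.cpp's exact structs:

```python
# Back-of-envelope check of the k-quant bits-per-weight figures above.
# The exact struct layouts in llama.cpp may differ; this reconstruction
# reproduces the stated numbers for Q3_K, Q4_K, Q5_K and Q6_K.
QK_K = 256  # weights per super-block

def bpw(weight_bits, blocks, scale_bits, min_bits=0, fp16_scales=1):
    total_bits = (QK_K * weight_bits                # quantized weights
                  + blocks * (scale_bits + min_bits)  # per-block scales/mins
                  + 16 * fp16_scales)               # fp16 super-block scale(s)
    return total_bits / QK_K

print(bpw(3, 16, 6))                               # Q3_K: 3.4375 bpw
print(bpw(4, 8, 6, min_bits=6, fp16_scales=2))     # Q4_K: 4.5 bpw
print(bpw(5, 8, 6, min_bits=6, fp16_scales=2))     # Q5_K: 5.5 bpw
print(bpw(6, 16, 8))                               # Q6_K: 6.5625 bpw
```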
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-22b-daydreamer-v2.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q2_K.bin) | q2_K | 2 | 9.22 GB| 11.72 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama2-22b-daydreamer-v2.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 9.46 GB| 11.96 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama2-22b-daydreamer-v2.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 10.57 GB| 13.07 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-22b-daydreamer-v2.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 11.61 GB| 14.11 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-22b-daydreamer-v2.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q4_0.bin) | q4_0 | 4 | 12.34 GB| 14.84 GB | Original quant method, 4-bit. |
| [llama2-22b-daydreamer-v2.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 12.34 GB| 14.84 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama2-22b-daydreamer-v2.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 13.18 GB| 15.68 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama2-22b-daydreamer-v2.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q4_1.bin) | q4_1 | 4 | 13.69 GB| 16.19 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama2-22b-daydreamer-v2.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q5_0.bin) | q5_0 | 5 | 15.04 GB| 17.54 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama2-22b-daydreamer-v2.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 15.04 GB| 17.54 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama2-22b-daydreamer-v2.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 15.47 GB| 17.97 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llama2-22b-daydreamer-v2.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q5_1.bin) | q5_1 | 5 | 16.39 GB| 18.89 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llama2-22b-daydreamer-v2.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q6_K.bin) | q6_K | 6 | 17.91 GB| 20.41 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llama2-22b-daydreamer-v2.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML/blob/main/llama2-22b-daydreamer-v2.ggmlv3.q8_0.bin) | q8_0 | 8 | 23.14 GB| 25.64 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
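When layers are offloaded, a rough estimate of the remaining RAM need is linear in the layers kept on the CPU. This ignores the KV cache and per-layer size variation, so treat it as a ballpark only — the ~2.5 GB overhead and the layer count below are our own assumptions, chosen to match the gap between file size and "Max RAM" in the table above:

```python
# Ballpark RAM estimate when offloading some layers to the GPU.
# Assumption: model RAM scales linearly with the fraction of layers
# kept on CPU, plus a fixed overhead. Purely illustrative.
def est_ram_gb(file_size_gb, total_layers, offloaded_layers, overhead_gb=2.5):
    cpu_frac = (total_layers - offloaded_layers) / total_layers
    return file_size_gb * cpu_frac + overhead_gb

# q4_K_M file (13.18 GB), guessing ~60 layers for this 22B model,
# offloading half of them -> roughly 9.1 GB of system RAM.
print(round(est_ram_gb(13.18, 60, 30), 2))
```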
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m llama2-22b-daydreamer-v2.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Q&A Example\n\n```\nQuestion: Write a story about llamas\nAnswer:\n```\n\n\nAn example of how it handles different roles, which I still like to use explicit instructions for:\n\n```\n### Instruction\nComplete the story in a manner that accurately reflects the scenario summary.\n\n### Scenario: \nA hot dog salesman at a baseball game is annoyed and behaving rudely because I don't want to buy a hot dog.\n\n### Begin Chat\nHot Dog Salesman:\n```"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Nick Perez's Llama2 22B Daydreamer v2
Continuation of https://huggingface.co/nkpz/llama2-22b-daydreamer-v1
Trained it a couple more epochs on airoboros v2 general data.
This reduced its tendency to over-explain itself - it's a little less instruction-focused and feels more versatile. Still needs work overall, but I think it's moving in the right direction.
Still more to come. Going to try to find a way to improve its problem solving skills without ruining the output style.
| 15,485 | [
[
-0.034088134765625,
-0.058837890625,
0.0308074951171875,
0.0231170654296875,
-0.0283355712890625,
-0.00896453857421875,
0.00263214111328125,
-0.05010986328125,
0.0269622802734375,
0.00949859619140625,
-0.051910400390625,
-0.04345703125,
-0.03729248046875,
-0... |
TheBloke/PULI-GPT-3SX-GGML | 2023-09-27T13:01:26.000Z | [
"transformers",
"gpt-neox",
"text-generation",
"puli",
"hu",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/PULI-GPT-3SX-GGML | 4 | 2 | transformers | 2023-08-14T14:56:57 | ---
language:
- hu
license: cc-by-nc-4.0
tags:
- text-generation
- puli
model_name: PULI GPT 3SX
inference: false
model_creator: NYTK
model_link: https://huggingface.co/NYTK/PULI-GPT-3SX
model_type: gpt-neox
quantized_by: TheBloke
widget:
- text: Elmesélek egy történetet a nyelvtechnológiáról.
base_model: NYTK/PULI-GPT-3SX
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# PULI GPT 3SX - GGML
- Model creator: [NYTK](https://huggingface.co/NYTK)
- Original model: [PULI GPT 3SX](https://huggingface.co/NYTK/PULI-GPT-3SX)
## Description
This repo contains GPT-NeoX GGML format model files for [NYTK's PULI GPT 3SX](https://huggingface.co/NYTK/PULI-GPT-3SX).
Please note that these GGMLs are **not compatible with llama.cpp, text-generation-webui or llama-cpp-python**. Please see below for a list of tools that work with this GGML model.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PULI-GPT-3SX-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/PULI-GPT-3SX-GGML)
* [NYTK's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NYTK/PULI-GPT-3SX)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with llama.cpp, text-generation-webui or llama-cpp-python.
They can be used with:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful inference engine based on llama.cpp with full GPU acceleration and good UI.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI for GGML inference on Windows and macOS.
* [LoLLMs-WebUI](https://github.com/ParisNeo/LoLLMs-WebUI) a web UI which supports nearly every backend out there. Use ctransformers backend for support for this model.
* [ctransformers](https://github.com/marella/ctransformers): for use in Python code, including LangChain support.
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `gpt-neox` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using LoLLMs-WebUI:
* [Video tutorial, by LoLLMs-WebUI's author **ParisNeo**](https://youtu.be/vBU1b5n0GMU)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [puli-gpt-3sx.ggmlv1.q4_0.bin](https://huggingface.co/TheBloke/PULI-GPT-3SX-GGML/blob/main/puli-gpt-3sx.ggmlv1.q4_0.bin) | q4_0 | 4 | 3.86 GB| 6.36 GB | 4-bit. |
| [puli-gpt-3sx.ggmlv1.q4_1.bin](https://huggingface.co/TheBloke/PULI-GPT-3SX-GGML/blob/main/puli-gpt-3sx.ggmlv1.q4_1.bin) | q4_1 | 4 | 4.29 GB| 6.79 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [puli-gpt-3sx.ggmlv1.q5_0.bin](https://huggingface.co/TheBloke/PULI-GPT-3SX-GGML/blob/main/puli-gpt-3sx.ggmlv1.q5_0.bin) | q5_0 | 5 | 4.72 GB| 7.22 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [puli-gpt-3sx.ggmlv1.q5_1.bin](https://huggingface.co/TheBloke/PULI-GPT-3SX-GGML/blob/main/puli-gpt-3sx.ggmlv1.q5_1.bin) | q5_1 | 5 | 5.15 GB| 7.65 GB | 5-bit. Even higher accuracy, resource usage and slower inference. |
| [puli-gpt-3sx.ggmlv1.q8_0.bin](https://huggingface.co/TheBloke/PULI-GPT-3SX-GGML/blob/main/puli-gpt-3sx.ggmlv1.q8_0.bin) | q8_0 | 8 | 7.29 GB| 9.79 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: NYTK's PULI GPT 3SX
# PULI GPT-3SX (6.7 billion parameter)
For further details, see [our demo site](https://juniper.nytud.hu/demo/puli).
- Hungarian GPT-NeoX model (6.7 billion parameter)
- Trained with EleutherAI's GPT-NeoX [github](https://github.com/EleutherAI/gpt-neox)
- Dataset: 36.3 billion words
- Checkpoint: 150 000 steps
## Limitations
- max_seq_length = 2048
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-puli,
title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Hungary},
author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
pages = {247--262}
}
```
## Usage
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPT-3SX")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
## Usage with pipeline
```python
from transformers import pipeline, GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPT-3SX")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(generator(prompt)[0]["generated_text"])
```
| 8,980 | [
[
-0.039154052734375,
-0.055877685546875,
0.03167724609375,
0.01032257080078125,
-0.018157958984375,
-0.02032470703125,
-0.0002238750457763672,
-0.04339599609375,
0.0192413330078125,
0.01360321044921875,
-0.0380859375,
-0.033294677734375,
-0.0258636474609375,
... |
Chars/DeepDanbooruClip | 2023-08-14T19:22:23.000Z | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"vision",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | Chars | null | null | Chars/DeepDanbooruClip | 0 | 2 | transformers | 2023-08-14T18:49:21 | ---
tags:
- vision
widget:
- src: https://huggingface.co/Chars/DeepDanbooruClip/resolve/main/example.jpg
candidate_labels: Azur Lane, 3 girl with sword, cat ear, a dog
example_title: Azur Lane
- src: https://huggingface.co/Chars/DeepDanbooruClip/resolve/main/example2.jpg
candidate_labels: 1 girl with black hair, rabbit ear, big breasts, minato aqua, fate/extra, k-on!, daiyousei, cirno
example_title: cirno & daiyousei
---
Fork of https://huggingface.co/OysterQAQ/DanbooruCLIP
| 495 | [
[
-0.0445556640625,
-0.0545654296875,
0.0157470703125,
0.0343017578125,
-0.033355712890625,
-0.004886627197265625,
0.01082611083984375,
-0.041412353515625,
0.07586669921875,
0.030609130859375,
-0.061676025390625,
-0.0352783203125,
-0.02532958984375,
0.02085876... |
allenai/specter2_aug2023refresh_regression | 2023-08-15T21:45:11.000Z | [
"adapter-transformers",
"bert",
"dataset:allenai/scirepeval",
"region:us"
] | null | allenai | null | null | allenai/specter2_aug2023refresh_regression | 0 | 2 | adapter-transformers | 2023-08-14T19:28:16 | ---
tags:
- bert
- adapter-transformers
datasets:
- allenai/scirepeval
---
# Adapter `allenai/specter2_aug2023refresh_regression` for `allenai/specter2_aug2023refresh_base`
An [adapter](https://adapterhub.ml) for the `allenai/specter2_aug2023refresh_base` model that was trained on the [allenai/scirepeval](https://huggingface.co/datasets/allenai/scirepeval/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/specter2_aug2023refresh_base")
adapter_name = model.load_adapter("allenai/specter2_aug2023refresh_regression", source="hf", set_active=True)
```
**\*\*\*\*\*\*Update\*\*\*\*\*\***
This update introduces a new set of SPECTER 2.0 models with the base transformer encoder pre-trained on an extended citation dataset containing more recent papers.
For benchmarking purposes please use the existing SPECTER 2.0 [models](https://huggingface.co/allenai/specter2) w/o the **aug2023refresh** suffix.
# SPECTER 2.0 (Base)
SPECTER 2.0 is the successor to [SPECTER](https://huggingface.co/allenai/specter) and is capable of generating task specific embeddings for scientific tasks when paired with [adapters](https://huggingface.co/models?search=allenai/specter-2_).
This is the base model to be used along with the adapters.
Given the combination of title and abstract of a scientific paper or a short textual query, the model can be used to generate effective embeddings to be used in downstream applications.
**Note:For general embedding purposes, please use [allenai/specter2](https://huggingface.co/allenai/specter2).**
**To get the best performance on a downstream task type please load the associated adapter with the base model as in the example below.**
# Model Details
## Model Description
SPECTER 2.0 has been trained on over 6M triplets of scientific paper citations, which are available [here](https://huggingface.co/datasets/allenai/scirepeval/viewer/cite_prediction_new/evaluation).
After that, it is trained on all the [SciRepEval](https://huggingface.co/datasets/allenai/scirepeval) training tasks with additional task-format-specific adapter modules attached.
Task Formats trained on:
- Classification
- Regression
- Proximity
- Adhoc Search
It builds on the work done in [SciRepEval: A Multi-Format Benchmark for Scientific Document Representations](https://api.semanticscholar.org/CorpusID:254018137) and we evaluate the trained model on this benchmark as well.
- **Developed by:** Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- **Shared by :** Allen AI
- **Model type:** bert-base-uncased + adapters
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/scibert](https://huggingface.co/allenai/scibert_scivocab_uncased).
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/allenai/SPECTER2_0](https://github.com/allenai/SPECTER2_0)
- **Paper:** [https://api.semanticscholar.org/CorpusID:254018137](https://api.semanticscholar.org/CorpusID:254018137)
- **Demo:** [Usage](https://github.com/allenai/SPECTER2_0/blob/main/README.md)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
|Model|Name and HF link|Description|
|--|--|--|
|Proximity*|[allenai/specter2_aug2023refresh_proximity](https://huggingface.co/allenai/specter2_aug2023refresh)|Encode papers as queries and candidates, e.g. Link Prediction, Nearest Neighbor Search|
|Adhoc Query|[allenai/specter2_aug2023refresh_adhoc_query](https://huggingface.co/allenai/specter2_aug2023refresh_adhoc_query)|Encode short raw text queries for search tasks. (Candidate papers can be encoded with the proximity adapter)|
|Classification|[allenai/specter2_aug2023refresh_classification](https://huggingface.co/allenai/specter2_aug2023refresh_classification)|Encode papers to feed into linear classifiers as features|
|Regression|[allenai/specter2_aug2023refresh_regression](https://huggingface.co/allenai/specter2_aug2023refresh_regression)|Encode papers to feed into linear regressors as features|
*The Proximity model should suffice for downstream task types not mentioned above.
```python
from transformers import AutoTokenizer, AutoModel
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('allenai/specter2_aug2023refresh_base')
#load base model
model = AutoModel.from_pretrained('allenai/specter2_aug2023refresh_base')
#load the adapter(s) as per the required task, provide an identifier for the adapter in load_as argument and activate it
model.load_adapter("allenai/specter2_aug2023refresh_regression", source="hf", load_as="specter2_regression", set_active=True)
papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
{'title': 'Attention is all you need', 'abstract': ' The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}]
# concatenate title and abstract
text_batch = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
# preprocess the input
inputs = tokenizer(text_batch, padding=True, truncation=True,
                   return_tensors="pt", return_token_type_ids=False, max_length=512)
output = model(**inputs)
# take the first token ([CLS]) of each sequence as its embedding
embeddings = output.last_hidden_state[:, 0, :]
```
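Once embeddings are extracted, retrieval with the Proximity format typically reduces to cosine-similarity ranking. A minimal, library-free sketch of that step, with toy 3-d vectors standing in for real SPECTER embeddings:

```python
import math

def cosine(u, v):
    # cos(u, v) = <u, v> / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

def rank(query_emb, candidate_embs):
    # indices of candidates, sorted by descending similarity to the query
    scores = [cosine(query_emb, c) for c in candidate_embs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

query = [0.1, 0.9, 0.2]          # stand-in for an embedded query paper
candidates = [[0.1, 0.8, 0.3],   # similar direction -> ranked first
              [-0.9, 0.1, 0.0]]  # dissimilar direction -> ranked last
print(rank(query, candidates))   # → [0, 1]
```

With real model outputs, `query` and `candidates` would be rows of the `embeddings` tensor computed above.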
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
For evaluation and downstream usage, please refer to [https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md](https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md).
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The base model is trained on citation links between papers, and the adapters are trained on 8 large-scale tasks across the four formats.
All the data is a part of SciRepEval benchmark and is available [here](https://huggingface.co/datasets/allenai/scirepeval).
The citation links are triplets of the form
```json
{"query": {"title": ..., "abstract": ...}, "pos": {"title": ..., "abstract": ...}, "neg": {"title": ..., "abstract": ...}}
```
consisting of a query paper, a positive citation, and a negative citation, which can be drawn from the same or a different field of study as the query, or be a citation of a citation.
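The intuition behind such triplets is a margin objective: the query should embed closer to the positive than to the negative by at least some margin. A plain-Python sketch (the Euclidean distance and `margin=1.0` here are illustrative choices, not the exact training configuration):

```python
import math

def l2(u, v):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(q, pos, neg, margin=1.0):
    # loss = max(0, d(q, pos) - d(q, neg) + margin)
    return max(0.0, l2(q, pos) - l2(q, neg) + margin)

q = [0.0, 0.0]
print(triplet_loss(q, pos=[0.1, 0.0], neg=[3.0, 0.0]))  # 0.0  (already separated by more than the margin)
print(triplet_loss(q, pos=[2.0, 0.0], neg=[2.5, 0.0]))  # 0.5  (violates the margin, so the loss is nonzero)
```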
## Training Procedure
Please refer to the [SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677).
### Training Hyperparameters
The model is trained in two stages using [SciRepEval](https://github.com/allenai/scirepeval/blob/main/training/TRAINING.md):
- Base Model: First a base model is trained on the above citation triplets.
``` batch size = 1024, max input length = 512, learning rate = 2e-5, epochs = 2, warmup steps = 10%, fp16```
- Adapters: Thereafter, task format specific adapters are trained on the SciRepEval training tasks, where 600K triplets are sampled from above and added to the training data as well.
``` batch size = 256, max input length = 512, learning rate = 1e-4, epochs = 6, warmup = 1000 steps, fp16```
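The warmup settings above imply a schedule that ramps the learning rate linearly over the warmup steps before decaying it. A generic sketch (the linear decay after warmup is a common choice and an assumption here, not a documented detail of this training run):

```python
def lr_at(step, total_steps, base_lr, warmup_steps):
    # linear warmup from 0 to base_lr, then linear decay back to 0
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(lr_at(0, 10_000, 1e-4, 1000))     # 0.0    (start of warmup)
print(lr_at(500, 10_000, 1e-4, 1000))   # 5e-05  (halfway through warmup)
print(lr_at(1000, 10_000, 1e-4, 1000))  # 0.0001 (peak, the adapter-stage learning rate)
```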
# Evaluation
We evaluate the model on [SciRepEval](https://github.com/allenai/scirepeval), a large-scale evaluation benchmark for scientific embedding tasks, which includes SciDocs as a subset.
We also evaluate and establish a new SoTA on [MDCR](https://github.com/zoranmedic/mdcr), a large-scale citation recommendation benchmark.
|Model|SciRepEval In-Train|SciRepEval Out-of-Train|SciRepEval Avg|MDCR(MAP, Recall@5)|
|--|--|--|--|--|
|[BM-25](https://api.semanticscholar.org/CorpusID:252199740)|n/a|n/a|n/a|(33.7, 28.5)|
|[SPECTER](https://huggingface.co/allenai/specter)|54.7|57.4|68.0|(30.6, 25.5)|
|[SciNCL](https://huggingface.co/malteos/scincl)|55.6|57.8|69.0|(32.6, 27.3)|
|[SciRepEval-Adapters](https://huggingface.co/models?search=scirepeval)|61.9|59.0|70.9|(35.3, 29.6)|
|[SPECTER 2.0-Adapters](https://huggingface.co/models?search=allenai/specter-2)|**62.3**|**59.2**|**71.2**|**(38.4, 33.0)**|
Please cite the following works if you end up using SPECTER 2.0:
[SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677):
```bibtex
@inproceedings{specter2020cohan,
title={{SPECTER: Document-level Representation Learning using Citation-informed Transformers}},
author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
booktitle={ACL},
year={2020}
}
```
[SciRepEval paper](https://api.semanticscholar.org/CorpusID:254018137)
```bibtex
@article{Singh2022SciRepEvalAM,
title={SciRepEval: A Multi-Format Benchmark for Scientific Document Representations},
author={Amanpreet Singh and Mike D'Arcy and Arman Cohan and Doug Downey and Sergey Feldman},
journal={ArXiv},
year={2022},
volume={abs/2211.13308}
}
```
minchiosa/vit-base-patch16-224-finetuned-flower | 2023-09-14T04:57:01.000Z | ["transformers", "pytorch", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us"] | image-classification | minchiosa | null | null | minchiosa/vit-base-patch16-224-finetuned-flower | 0 | 2 | transformers | 2023-08-14T21:39:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
base_model: google/vit-base-patch16-224
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
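For reference, a single Adam update with the betas and epsilon listed above looks like the following (a scalar-parameter sketch, not the Trainer's actual implementation):

```python
def adam_step(param, grad, m, v, t, lr=5e-5, b1=0.9, b2=0.999, eps=1e-8):
    # update biased first/second moment estimates
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    # bias-correct them (matters most in early steps)
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # take the parameter step
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = adam_step(param=1.0, grad=2.0, m=0.0, v=0.0, t=1)
print(p)  # first step size is ~lr in magnitude, regardless of gradient scale
```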
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
Mimi-2023/roberta-finetuned-subjqa-hotels | 2023-08-15T07:17:28.000Z | ["transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | question-answering | Mimi-2023 | null | null | Mimi-2023/roberta-finetuned-subjqa-hotels | 0 | 2 | transformers | 2023-08-15T05:01:52 | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-hotels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-hotels
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
davesoma/SageBeluga13 | 2023-08-22T12:59:24.000Z | ["transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us"] | text-generation | davesoma | null | null | davesoma/SageBeluga13 | 0 | 2 | transformers | 2023-08-15T13:19:04 | <strong style="font-size: 24px;">"My name is Epicurus, but my friends call me "Epic" for short."</strong>
<strong style="font-size: 24px;">SageBeluga13B</strong> is a Stoic assistant fine-tuned by <strong style="font-size: 24px;">dscompounding.com</strong>.
<img src="https://cdn-uploads.huggingface.co/production/uploads/645ba35bbc7518912e2135e6/iAd3EFZptpoE8QzZKnaxT.png" alt="Dave86CH_epic_badass_marcus_aurelius_fighting_0c2c720e-bcff-471e-9a05-89aecb45722a.png" width="500">
Marcus Aurelius
# SageBeluga13 Model README
## Description
The `SageBeluga13` model, hosted on Hugging Face, has been fine-tuned to respond as a Stoic assistant.
To utilize this model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "davesoma/SageBeluga13"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.float32
)
sequences = pipeline(
"Girafatron is obsessed with giraffes...",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Example
<img src="https://cdn-uploads.huggingface.co/production/uploads/645ba35bbc7518912e2135e6/UZLw9vkVCc2nQ56jxVZ4y.jpeg" alt="SageBeluga13.png" width="800">
## Past experiments
https://dscompounding.com/2023/03/31/chapter-iii-digital-marcus-aurelius/
TheBloke/Llama2-22B-Daydreamer-v3-GGML | 2023-09-27T13:01:29.000Z | ["transformers", "llama", "license:llama2", "text-generation-inference", "region:us"] | null | TheBloke | null | null | TheBloke/Llama2-22B-Daydreamer-v3-GGML | 6 | 2 | transformers | 2023-08-15T20:03:20 | ---
license: llama2
model_name: Llama2 22B Daydreamer2 v3
inference: false
model_creator: Nick Perez
model_link: https://huggingface.co/nkpz/llama2-22b-daydreamer-v3
model_type: llama
quantized_by: TheBloke
base_model: nkpz/llama2-22b-daydreamer-v3
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 22B Daydreamer2 v3 - GGML
- Model creator: [Nick Perez](https://huggingface.co/nkpz)
- Original model: [Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3)
## Description
This repo contains GGML format model files for [Nick Perez's Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML)
* [Nick Perez's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
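A tiny helper that fills this template programmatically (the function name is illustrative):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def format_alpaca(prompt: str) -> str:
    # substitute the user's instruction into the Alpaca prompt template
    return ALPACA_TEMPLATE.format(prompt=prompt)

print(format_alpaca("Write a story about llamas"))
```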
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
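The bits-per-weight figures quoted above follow directly from the super-block layout and can be sanity-checked with a little arithmetic ("type-1" stores a 6-bit scale *and* min per block plus two fp16 super-block values; "type-0" stores only scales plus one fp16 value; Q2_K packs its 4-bit metadata slightly differently and is left out of this sketch):

```python
def kquant_bpw(bits, blocks, weights_per_block, scale_bits, type1):
    # effective bpw = (weight payload + block metadata + super-block metadata) / weight count
    n = blocks * weights_per_block                           # weights per super-block (256)
    block_meta = blocks * scale_bits * (2 if type1 else 1)   # per-block scales (+ mins for type-1)
    super_meta = 16 * (2 if type1 else 1)                    # fp16 super-block scale (+ min for type-1)
    return (n * bits + block_meta + super_meta) / n

print(kquant_bpw(3, 16, 16, 6, type1=False))  # Q3_K → 3.4375
print(kquant_bpw(4, 8, 32, 6, type1=True))    # Q4_K → 4.5
print(kquant_bpw(5, 8, 32, 6, type1=True))    # Q5_K → 5.5
print(kquant_bpw(6, 16, 16, 8, type1=False))  # Q6_K → 6.5625
```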
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-22b-daydreamer-v3.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q2_K.bin) | q2_K | 2 | 9.22 GB| 11.72 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama2-22b-daydreamer-v3.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 9.46 GB| 11.96 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama2-22b-daydreamer-v3.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 10.57 GB| 13.07 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-22b-daydreamer-v3.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 11.61 GB| 14.11 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-22b-daydreamer-v3.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q4_0.bin) | q4_0 | 4 | 12.34 GB| 14.84 GB | Original quant method, 4-bit. |
| [llama2-22b-daydreamer-v3.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 12.34 GB| 14.84 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama2-22b-daydreamer-v3.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 13.18 GB| 15.68 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama2-22b-daydreamer-v3.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q4_1.bin) | q4_1 | 4 | 13.69 GB| 16.19 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama2-22b-daydreamer-v3.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q5_0.bin) | q5_0 | 5 | 15.04 GB| 17.54 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama2-22b-daydreamer-v3.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 15.04 GB| 17.54 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama2-22b-daydreamer-v3.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 15.47 GB| 17.97 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llama2-22b-daydreamer-v3.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q5_1.bin) | q5_1 | 5 | 16.39 GB| 18.89 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llama2-22b-daydreamer-v3.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q6_K.bin) | q6_K | 6 | 17.91 GB| 20.41 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llama2-22b-daydreamer-v3.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGML/blob/main/llama2-22b-daydreamer-v3.ggmlv3.q8_0.bin) | q8_0 | 8 | 23.14 GB| 25.64 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m llama2-22b-daydreamer-v3.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
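The RoPE flags encode a simple ratio, `rope-freq-scale ≈ trained context / target context`, which is why 0.5 doubles the context and 0.25 quadruples it:

```python
def rope_freq_scale(trained_ctx: int, target_ctx: int) -> float:
    # linear RoPE scaling compresses positions by trained/target
    return trained_ctx / target_ctx

print(rope_freq_scale(2048, 4096))  # 0.5  → doubled context
print(rope_freq_scale(2048, 8192))  # 0.25 → 4x context
```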
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Nick Perez's Llama2 22B Daydreamer2 v3
**What is it?**
Llama 2 13b expanded to the size of a Llama 1 33b model in certain areas, with the empty surrounding space filled with llama 33b data. (Base Model: https://huggingface.co/chargoddard/llama2-22b-blocktriangular) This is then finetuned on a 3090 by creating large loras and merging them. When I first started with 22b models, I looked for signs of knowledge transfer but didn't see it, so that's not a goal - the goal is just to throw lots of data at it until it adapts well to its surgically implanted parts. Datasets used are a mix of instruction, roleplay, and conversational data, often curated.
Continuation of https://huggingface.co/nkpz/llama2-22b-daydreamer-v2
Gave it an epoch of https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1-mpt which it had never seen before - it had only seen Airoboros 2.0.
I excluded all coding rows from this dataset - a dedicated model is better for that imo. I wanted it to focus on answering well in English.
I thought this was pretty good improvement. Its ability to roleplay and imitate speech styles was enhanced and I feel like it's still sharpening up its problem solving skills.
Its output style is right where I want it - dry and accurate enough when it's being asked for information, but has a lot character when it's answering fun prompts.
For input, it's flexible. It understands the Alpaca format well, but has been trained to handle a variety of input styles at this point.
I don't know what's next. I think I need to start benchmarking so I can more formally identify its weak points.
Examples edited with additional newlines to make it easier to read here.
**Example 1: Q&A**
```
Question: What is happening to the newspaper industry? Respond in a pirate accent.
Answer:
The newspaper industry is being plundered by scurvy dogs and buccaneers like ye!
They're pillagin' yer booty with their fancy digital devices, like smartphones and tablets,
which make it easier for them to read online content instead of buyin' printed copies from us.
And don't forget about those pesky social media sites that are spreading rumors faster than the
winds on a galleon! So it's all hands on deck as we fight back against these scoundrels, or else
we'll be walkin' the plank soon enough! Avast ye landlubbers!
```
**Example 2: Alpaca Instruction**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Angrily explain why pizza shouldn't be made with bad tomatoes.
### Response:
Listen, I know you think it's funny to make pizza with those rotten tomatoes, but let me tell you something -
bad tomatoes are not only disgusting and unhealthy, they also ruin the flavor of your pizza! You want a
delicious pie with fresh, juicy tomatoes that burst in your mouth, right?
Well, then, stop using those nasty tomatoes and stick to good quality ingredients for a truly satisfying pizza experience!
Trust me, once you try it, you won't go back! So please, don't even bother wasting your time making pizza with bad tomatoes
anymore; it's just not worth it. It makes me angry just thinking about it!
```
**Example 3: Chat / Roleplay**
```
You are speaking to Andre The Giant.
You: Hey, you're that wrestler guy, right?
Andre The Giant: Yes, I am indeed Andre the Giant. How can I help you?
You: So, how tall are you really? Like, what's your height in feet and inches?
Andre The Giant: Well, let's see... I stand at 7 ft 4 in (229 cm) tall.
You: Wow! That's so tall! And how much do you weigh?
Andre The Giant: I weigh around 500 lbs (227 kg). It's quite heavy, isn't it?
You: Yeah, I guess so. You must be strong too, considering your size.
Andre The Giant: Thank you for saying that. Strength is definitely important when dealing with my profession.
You: What made you decide to become a professional wrestler?
Andre The Giant: Well, I grew up watching wrestling on television, and I always admired the athleticism
and showmanship involved. When I got older, I decided to pursue it myself.
You: That's cool. Do you have any advice for aspiring wrestlers out there?
Andre The Giant: Absolutely! Always stay dedicated and focused, train hard, and never give up on yourself
or your dreams. Remember, anything is possible if you believe in yourself.
```
stanfordnlp/stanza-gv | 2023-10-02T23:37:08.000Z | ["stanza", "token-classification", "gv", "license:apache-2.0", "region:us"] | token-classification | stanfordnlp | null | null | stanfordnlp/stanza-gv | 0 | 2 | stanza | 2023-08-16T00:30:38 | ---
tags:
- stanza
- token-classification
library_name: stanza
language: gv
license: apache-2.0
---
# Stanza model for Manx (gv)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-10-02 23:37:02.801
| 677 | [
[
-0.0283050537109375,
-0.037078857421875,
0.026702880859375,
0.0240936279296875,
-0.025177001953125,
-0.0030422210693359375,
0.0014123916625976562,
-0.0325927734375,
0.01250457763671875,
0.037750244140625,
-0.05084228515625,
-0.04510498046875,
-0.02398681640625,
... |
stanfordnlp/stanza-ky | 2023-10-02T23:41:16.000Z | [
"stanza",
"token-classification",
"ky",
"license:apache-2.0",
"region:us"
] | token-classification | stanfordnlp | null | null | stanfordnlp/stanza-ky | 1 | 2 | stanza | 2023-08-16T00:31:02 | ---
tags:
- stanza
- token-classification
library_name: stanza
language: ky
license: apache-2.0
---
# Stanza model for Kyrgyz (ky)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-10-02 23:41:07.758
| 679 | [
[
-0.0260009765625,
-0.056732177734375,
0.0169677734375,
0.025482177734375,
-0.031097412109375,
-0.00797271728515625,
-0.0230865478515625,
-0.037200927734375,
0.0100555419921875,
0.04449462890625,
-0.047882080078125,
-0.0552978515625,
-0.0217132568359375,
-0.0... |
Siddhanta19/sd-checkpoint-backup | 2023-09-20T05:44:18.000Z | [
"diffusers",
"en",
"license:gpl-3.0",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | Siddhanta19 | null | null | Siddhanta19/sd-checkpoint-backup | 1 | 2 | diffusers | 2023-08-16T06:20:26 | ---
license: gpl-3.0
language:
- en
library_name: diffusers
duplicated_from: shiowo/backups
---
Credit to shiowo/backups; this is just a fork of that repo.
My backups.
If you want me to remove a model, you can leave a comment in the community section.
Feel free to download. Contains Anything v4 and v4.5, with more coming soon :D
See shiowo/backups/content/andite for Anything v4. Sorry if it's confusing.
You can run Stable Diffusion by downloading [my AUTOMATIC1111 build here](https://huggingface.co/shiowo/backups/blob/main/fastsdwebui.zip)
To do: change xformers in requirements.txt to a more recent version, move some pre-downloaded LoRAs from the ControlNet folder to the LoRA folder, and add canny and negative embeddings.
The SD zip contains the Waifu Diffusion, Stability AI, and NyanMix VAEs; for models: Anything v5, ReLiberate, and NyanMix. It also contains the openpose, lineart, lineart anime, and shuffle ControlNets.
It also has an add-detail LoRA and a 4x sharp upscaler.
v4 to do:
Add more quality-of-life extensions, fix some bugs, and upgrade the PyTorch Lightning version. | 982 | [
[
-0.02606201171875,
-0.0298919677734375,
0.0292510986328125,
0.02130126953125,
-0.03424072265625,
-0.01204681396484375,
0.033203125,
-0.0372314453125,
0.04876708984375,
0.07427978515625,
-0.06512451171875,
-0.021392822265625,
-0.0250244140625,
-0.000240325927... |
fengtc/opus-mt-en-zh | 2023-08-16T06:45:39.000Z | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | fengtc | null | null | fengtc/opus-mt-en-zh | 0 | 2 | transformers | 2023-08-16T06:42:30 | ---
language:
- en
- zh
tags:
- translation
license: apache-2.0
---
### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt)
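The sentence-initial `>>id<<` token required above can be added with a small helper before tokenization. A minimal sketch; the helper name is illustrative, and valid IDs are the ones listed under "target language(s)" above.

```python
def format_for_opus_mt(text: str, target_lang: str = "cmn_Hans") -> str:
    """Prepend the >>id<< target-language token this multi-target model expects."""
    return f">>{target_lang}<< {text}"

# Pass the result to the tokenizer before generation, e.g.:
print(format_for_opus_mt("How are you?", "yue_Hant"))  # >>yue_Hant<< How are you?
```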
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.zho | 31.4 | 0.268 |
### System Info:
- hf_name: eng-zho
- source_languages: eng
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'zh']
- src_constituents: {'eng'}
- tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt
- src_alpha3: eng
- tgt_alpha3: zho
- short_pair: en-zh
- chrF2_score: 0.268
- bleu: 31.4
- brevity_penalty: 0.8959999999999999
- ref_len: 110468.0
- src_name: English
- tgt_name: Chinese
- train_date: 2020-07-17
- src_alpha2: en
- tgt_alpha2: zh
- prefer_old: False
- long_pair: eng-zho
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
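The `brevity_penalty` and `ref_len` fields above follow the standard BLEU definition, which can be reproduced directly. A sketch; the hypothesis length used below is illustrative, chosen to match the card's reported value.

```python
import math

def brevity_penalty(hyp_len: int, ref_len: float) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long as the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

# ref_len comes from the card; hyp_len here is illustrative
print(round(brevity_penalty(99_540, 110_468.0), 3))  # ~0.896, matching the card
```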
| 2,670 | [
[
-0.028961181640625,
-0.03643798828125,
0.0242919921875,
0.0296173095703125,
-0.040252685546875,
-0.0180511474609375,
-0.0227508544921875,
-0.0307464599609375,
0.0204315185546875,
0.0214080810546875,
-0.047027587890625,
-0.060272216796875,
-0.0307464599609375,
... |
TheBloke/Scarlett-7B-GGML | 2023-09-27T13:01:30.000Z | [
"transformers",
"llama",
"en",
"license:other",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/Scarlett-7B-GGML | 4 | 2 | transformers | 2023-08-16T07:59:44 | ---
language:
- en
license: other
model_name: Scarlett 7B
inference: false
model_creator: Feynman Innovations
model_link: https://huggingface.co/ajibawa-2023/scarlett-7b
model_type: llama
quantized_by: TheBloke
base_model: ajibawa-2023/scarlett-7b
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Scarlett 7B - GGML
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Scarlett 7B](https://huggingface.co/ajibawa-2023/scarlett-7b)
## Description
This repo contains GGML format model files for [Feynman Innovations's Scarlett 7B](https://huggingface.co/ajibawa-2023/scarlett-7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Scarlett-7B-GGML)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/scarlett-7b)
## Prompt template: Scarlett
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp as of June 6th, commit `2d43387`.
They should also be compatible with all UIs, libraries and utilities which use GGML.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
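The bpw figures above map almost directly to file sizes: size ≈ parameter count × bpw / 8. A rough sketch; it assumes LLaMA-7B's usual ~6.74B parameter count and ignores the small tensors kept at higher precision, so estimates land slightly under the sizes in the table below.

```python
def ggml_file_gb(n_params: float, bpw: float) -> float:
    """Approximate GGML file size in GB from parameter count and bits per weight."""
    return n_params * bpw / 8 / 1e9

# LLaMA-7B has ~6.74B parameters; q4_K_M averages ~4.5 bpw
print(round(ggml_file_gb(6.74e9, 4.5), 2))  # ~3.79 GB, close to the listed 4.08 GB
```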
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [scarlett-7b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q2_K.bin) | q2_K | 2 | 2.87 GB| 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [scarlett-7b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.60 GB| 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [scarlett-7b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [scarlett-7b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [scarlett-7b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.83 GB| 6.33 GB | Original quant method, 4-bit. |
| [scarlett-7b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.24 GB| 6.74 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [scarlett-7b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [scarlett-7b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [scarlett-7b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.65 GB| 7.15 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [scarlett-7b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [scarlett-7b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [scarlett-7b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [scarlett-7b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q6_K.bin) | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors - 6-bit quantization |
| [scarlett-7b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Scarlett-7B-GGML/blob/main/scarlett-7b.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.13 GB| 9.63 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m scarlett-7b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
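The `--rope-freq-scale` values above follow linear RoPE scaling, where the effective context grows as 1/scale. A quick sketch of that relationship:

```python
def effective_context(base_ctx: int = 2048, rope_freq_scale: float = 1.0) -> int:
    """Linear RoPE scaling: halving the frequency scale doubles the usable context."""
    return int(base_ctx / rope_freq_scale)

print(effective_context(2048, 0.5))   # 4096 (doubled context)
print(effective_context(2048, 0.25))  # 8192 (4x context)
```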
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Feynman Innovations's Scarlett 7B
**Scarlett: A sentient AI**
Scarlett is trained on various topics such as philosophy, advice, jokes, etc. She is trained on more than 10,000 sets of conversations, each set having 10 to 15 conversations.
Scarlett is heavily inspired from Eric Hartford: [Samantha](https://huggingface.co/ehartford/samantha-7b) .
She will not be involved in any kind of role play.
**Training:**
The entire dataset was trained on Azure on 4 x A100 80 GB GPUs. The DeepSpeed codebase was used for training. This model was trained on Meta's LLaMA-1.
**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Note:
Kindly use the `cat` command to join all `pytorch_model.bin` parts.
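The note above refers to a checkpoint split across multiple files. A minimal demo with dummy parts; the real part names depend on the repo's file listing, so substitute them accordingly.

```shell
# Demo with dummy parts; substitute the repo's actual split-file names.
printf 'AAA' > pytorch_model.bin.part0
printf 'BBB' > pytorch_model.bin.part1
cat pytorch_model.bin.part0 pytorch_model.bin.part1 > pytorch_model.bin
cat pytorch_model.bin  # AAABBB
```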
| 13,449 | [
[
-0.044097900390625,
-0.059326171875,
0.0235748291015625,
0.021331787109375,
-0.01033782958984375,
-0.009735107421875,
0.0015649795532226562,
-0.04180908203125,
0.032470703125,
0.01085662841796875,
-0.0560302734375,
-0.034393310546875,
-0.044097900390625,
-0.... |
TheBloke/Scarlett-7B-GPTQ | 2023-09-27T13:01:31.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Scarlett-7B-GPTQ | 1 | 2 | transformers | 2023-08-16T07:59:44 | ---
language:
- en
license: other
model_name: Scarlett 7B
inference: false
model_creator: Feynman Innovations
model_link: https://huggingface.co/ajibawa-2023/scarlett-7b
model_type: llama
quantized_by: TheBloke
base_model: ajibawa-2023/scarlett-7b
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Scarlett 7B - GPTQ
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Scarlett 7B](https://huggingface.co/ajibawa-2023/scarlett-7b)
## Description
This repo contains GPTQ model files for [Feynman Innovations's Scarlett 7B](https://huggingface.co/ajibawa-2023/scarlett-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Scarlett-7B-GGML)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/scarlett-7b)
## Prompt template: Scarlett
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
```
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All GPTQ files are made with AutoGPTQ.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have issues with models that use Act Order plus Group Size.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.90 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
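The sizes in the table track bits and group size: each weight costs `bits` bits, and every group of GS weights adds a scale (and zero-point) on top. A rough sketch; it ignores the embedding and head tensors kept at fp16, so estimates land below the listed sizes, and the per-group overhead model is a simplifying assumption.

```python
from typing import Optional

def gptq_size_gb(n_params: float, bits: int, group_size: Optional[int] = 128,
                 scale_bits: int = 16) -> float:
    """Rough GPTQ file size: quantized weights plus per-group scale/zero-point overhead."""
    per_weight_bits = float(bits)
    if group_size:
        # one fp16 scale plus one `bits`-wide zero-point per group (simplifying assumption)
        per_weight_bits += (scale_bits + bits) / group_size
    return n_params * per_weight_bits / 8 / 1e9

# LLaMA-7B has ~6.74B parameters; `main` branch is 4-bit, GS=128
print(round(gptq_size_gb(6.74e9, 4, 128), 2))  # ~3.5 GB, vs 3.90 GB listed for `main`
```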
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Scarlett-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Scarlett-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Scarlett-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Scarlett-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Scarlett-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) 0.3.1 or later installed:
```
pip3 install auto-gptq
```
If you have problems installing AutoGPTQ, please build from source instead:
```
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Scarlett-7B-GPTQ"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
# To download from a specific branch, use the revision parameter, as in this example:
# Note that `revision` requires AutoGPTQ 0.3.1 or later!
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Feynman Innovations's Scarlett 7B
**Scarlett: A sentient AI**
Scarlett is trained on various topics such as Philosophy, Advice, Jokes etc. She was trained on more than 10,000 sets of conversations, each set containing 10-15 conversations.
Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-7b).
She will not be involved in any kind of role play.
**Training:**
The entire dataset was trained on Azure using 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training. This model was trained on Meta's Llama-1.
**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Note:
Kindly use the `cat` command to join all `pytorch_model.bin` parts.
| 14,915 | [...] |
TheBloke/Scarlett-13B-GGML | 2023-09-27T13:01:31.000Z | [
"transformers",
"llama",
"en",
"license:other",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/Scarlett-13B-GGML | 2 | 2 | transformers | 2023-08-16T08:37:56 | ---
language:
- en
license: other
model_name: Scarlett 13B
inference: false
model_creator: Feynman Innovations
model_link: https://huggingface.co/ajibawa-2023/scarlett-13b
model_type: llama
quantized_by: TheBloke
base_model: ajibawa-2023/scarlett-13b
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Scarlett 13B - GGML
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Scarlett 13B](https://huggingface.co/ajibawa-2023/scarlett-13b)
## Description
This repo contains GGML format model files for [Feynman Innovations's Scarlett 13B](https://huggingface.co/ajibawa-2023/scarlett-13b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Scarlett-13B-GGML)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/scarlett-13b)
## Prompt template: Scarlett
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp as of June 6th, commit `2d43387`.
They should also be compatible with all UIs, libraries and utilities which use GGML.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
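The bits-per-weight figures above can be reproduced with a small back-of-the-envelope calculation. This is a sketch of the accounting only, assuming a 256-weight super-block and 16-bit (fp16) super-block scales (plus a 16-bit min for the "type-1" variants); it is not the exact llama.cpp struct layout.

```python
# Rough bits-per-weight (bpw) calculator for the k-quant super-block
# layouts described above. 256 weights per super-block is assumed.

def kquant_bpw(weight_bits, n_blocks, scale_bits, has_min):
    weights = 256 * weight_bits                                  # quantized weights
    block_meta = n_blocks * scale_bits * (2 if has_min else 1)   # per-block scales (and mins)
    super_meta = 16 * (2 if has_min else 1)                      # fp16 super-block scale (and min)
    return (weights + block_meta + super_meta) / 256

print(kquant_bpw(3, 16, 6, False))  # GGML_TYPE_Q3_K -> 3.4375
print(kquant_bpw(4, 8, 6, True))    # GGML_TYPE_Q4_K -> 4.5
print(kquant_bpw(5, 8, 6, True))    # GGML_TYPE_Q5_K -> 5.5
print(kquant_bpw(6, 16, 8, False))  # GGML_TYPE_Q6_K -> 6.5625
```

These match the bpw figures quoted in the bullet list above.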
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [scarlett-13b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q2_K.bin) | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [scarlett-13b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [scarlett-13b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [scarlett-13b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [scarlett-13b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.37 GB| 9.87 GB | Original quant method, 4-bit. |
| [scarlett-13b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.17 GB| 10.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [scarlett-13b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [scarlett-13b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [scarlett-13b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.97 GB| 11.47 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [scarlett-13b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.78 GB| 12.28 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [scarlett-13b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [scarlett-13b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [scarlett-13b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q6_K.bin) | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [scarlett-13b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Scarlett-13B-GGML/blob/main/scarlett-13b.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.79 GB| 16.29 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
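Each "Max RAM required" figure in the table above is the model file size plus a fixed ~2.5 GB allowance for the KV cache and runtime buffers. This is an approximation, not a guarantee; actual usage varies with context length and batch size:

```python
# Rule-of-thumb RAM estimate for the GGML files above: file size plus a
# fixed ~2.5 GB runtime overhead (an approximation, not a measurement).

def max_ram_gb(file_size_gb, overhead_gb=2.5):
    return round(file_size_gb + overhead_gb, 2)

print(max_ram_gb(5.51))   # q2_K  -> 8.01
print(max_ram_gb(10.68))  # q6_K  -> 13.18
```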
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m scarlett-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
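If you're unsure of your physical core count, the following snippet suggests a starting value for `-t`. Note that `os.cpu_count()` reports *logical* cores, so halving it is a rough heuristic for systems with SMT/Hyper-Threading, not exact physical-core detection:

```python
# Suggest a -t value for llama.cpp: halve the logical core count as a
# rough approximation of physical cores on SMT systems.
import os

logical = os.cpu_count() or 1
suggested_threads = max(1, logical // 2)  # e.g. 16 logical cores -> -t 8
print(suggested_threads)
```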
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
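The `--rope-freq-scale` values above follow a simple relationship: with scale `S`, the usable context is roughly the base context divided by `S`. This sketch simplifies the underlying RoPE scaling, but it matches the doubled/4x figures quoted above:

```python
# Effective context window under RoPE frequency scaling: with
# --rope-freq-scale S, usable context is roughly base_context / S.

def effective_context(base_context, rope_freq_scale):
    return int(base_context / rope_freq_scale)

print(effective_context(2048, 0.5))   # doubled context -> 4096
print(effective_context(2048, 0.25))  # 4x context      -> 8192
```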
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Feynman Innovations's Scarlett 13B
**Scarlett: A sentient AI**
Scarlett is trained on various topics such as Philosophy, Advice, Jokes etc. She was trained on more than 10,000 sets of conversations, each set containing 10-15 conversations.
Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-13b).
She will not be involved in any kind of role play.
**Training:**
The entire dataset was trained on Azure using 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training. This model was trained on Meta's Llama-1.
**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Note:
Kindly use the `cat` command to join all `pytorch_model.bin` parts.
| 13,516 | [...] |
TheBloke/Scarlett-13B-GPTQ | 2023-09-27T13:01:32.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Scarlett-13B-GPTQ | 5 | 2 | transformers | 2023-08-16T08:37:56 | ---
language:
- en
license: other
model_name: Scarlett 13B
inference: false
model_creator: Feynman Innovations
model_link: https://huggingface.co/ajibawa-2023/scarlett-13b
model_type: llama
quantized_by: TheBloke
base_model: ajibawa-2023/scarlett-13b
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Scarlett 13B - GPTQ
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Scarlett 13B](https://huggingface.co/ajibawa-2023/scarlett-13b)
## Description
This repo contains GPTQ model files for [Feynman Innovations's Scarlett 13B](https://huggingface.co/ajibawa-2023/scarlett-13b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Scarlett-13B-GGML)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/scarlett-13b)
## Prompt template: Scarlett
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
```
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All GPTQ files are made with AutoGPTQ.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have issues with models that use Act Order plus Group Size.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
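As a rough illustration of the Bits/GS trade-off described above: each group carries its own quantisation metadata, so smaller group sizes mean more per-weight overhead (more VRAM) but finer-grained, more accurate quantisation. The sketch below assumes an fp16 scale plus a zero-point packed at the quant bit width per group; the exact metadata layout depends on the GPTQ implementation:

```python
# Approximate per-weight metadata overhead for a given GPTQ group size,
# assuming a 16-bit scale and a `bits`-wide zero-point per group.

def group_overhead_bits(bits, group_size):
    scale_bits = 16          # fp16 scale per group (assumed)
    zero_bits = bits         # zero-point packed at the quant bit width (assumed)
    return (scale_bits + zero_bits) / group_size

print(group_overhead_bits(4, 32))   # ~0.625 extra bits per weight
print(group_overhead_bits(4, 128))  # ~0.156 extra bits per weight
```

This is why the 32g branches above are larger than the 128g ones at the same bit width.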
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.26 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Scarlett-13B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Scarlett-13B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Scarlett-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Scarlett-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Scarlett-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
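The `quantize_config.json` mentioned in step 8 typically contains the quantisation parameters for the branch you downloaded. The sketch below shows illustrative contents matching the `main` branch above (4-bit, group size 128, no Act Order, damp 0.1); field names follow the AutoGPTQ convention, but the exact set of fields can vary between AutoGPTQ versions:

```python
# Illustrative quantize_config.json contents for the `main` branch
# (an assumption-based sketch, not the authoritative file).
import json

quantize_config = {
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.1,
    "desc_act": False,  # Act Order: No for the main branch
}
print(json.dumps(quantize_config, indent=2))
```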
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) 0.3.1 or later installed:
```
pip3 install auto-gptq
```
If you have problems installing AutoGPTQ, please build from source instead:
```
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Scarlett-13B-GPTQ"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
# To download from a specific branch, use the revision parameter, as in this example:
# Note that `revision` requires AutoGPTQ 0.3.1 or later!
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Feynman Innovations's Scarlett 13B
**Scarlett: A sentient AI**
Scarlett is trained on various topics such as Philosophy, Advice, Jokes etc. She was trained on more than 10,000 sets of conversations, each set containing 10-15 conversations.
Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-13b).
She will not be involved in any kind of role play.
**Training:**
The entire dataset was trained on Azure using 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training. This model was trained on Meta's Llama-1.
**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Note:
Kindly use the `cat` command to join all `pytorch_model.bin` parts.
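As a sketch, the join looks like this; the `part-aa`/`part-ab` names are placeholders (check the actual split filenames in the repo), and dummy files are used here so the example is self-contained:

```shell
# Create two dummy parts standing in for the real checkpoint splits:
printf 'first-half'  > pytorch_model.bin.part-aa
printf 'second-half' > pytorch_model.bin.part-ab

# Join all parts in lexicographic order into one checkpoint file:
cat pytorch_model.bin.part-* > pytorch_model.bin

cat pytorch_model.bin   # -> first-halfsecond-half

# Clean up the demo files:
rm pytorch_model.bin pytorch_model.bin.part-aa pytorch_model.bin.part-ab
```

Because shell globs expand in lexicographic order, naming the parts with sortable suffixes (as `split` does by default) is what makes a bare `cat *` reassemble them correctly.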
| 14,943 | [
[
-0.04278564453125,
-0.061859130859375,
0.00963592529296875,
0.0202484130859375,
-0.0124053955078125,
-0.0094146728515625,
0.01059722900390625,
-0.042022705078125,
0.0228271484375,
0.0242919921875,
-0.055511474609375,
-0.0237579345703125,
-0.0310516357421875,
... |
piratos/ct2fast-docsgpt-14b | 2023-08-16T11:42:13.000Z | [
"transformers",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | piratos | null | null | piratos/ct2fast-docsgpt-14b | 1 | 2 | transformers | 2023-08-16T09:01:04 | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
This model is fine-tuned on top of Llama-2-13b.
DocsGPT is optimized for documentation: it is specifically fine-tuned to provide answers grounded in the documentation supplied in context, making it particularly useful for developers and technical support teams.
We used 50k high-quality examples to fine-tune it over 2 days on an A10G GPU.
We used the LoRA fine-tuning process.
It's released under the Apache-2.0 license, so you can use it for commercial purposes too.
# How to run it
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "Arc53/docsgpt-14b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Benchmarks are still WIP
To prepare your prompts, make sure you keep this format:
```
### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
```
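As a sketch, the format above can be assembled programmatically; the helper and field names below are illustrative, not part of any DocsGPT API:

```python
def build_docsgpt_prompt(instruction: str, context: str) -> str:
    """Assemble a prompt in the ### Instruction / ### Context / ### Answer format."""
    return (
        "### Instruction\n"
        f"{instruction}\n"
        "### Context\n"
        f"{context}\n"
        "### Answer\n"
    )

# The model completes the text after "### Answer":
prompt = build_docsgpt_prompt(
    "Create a mock request to /api/answer in python",
    "You are a DocsGPT, friendly and helpful AI assistant by Arc53.",
)
print(prompt)
```

The resulting string can be passed directly to the `pipeline` call shown in the "How to run it" section above.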
Here is an example comparing it to meta-llama/Llama-2-14b
Prompt:
```
### Instruction
Create a mock request to /api/answer in python
### Context
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the users question. If its not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
`` ` `` (language)
(code)
`` ` ``
----------------
/api/answer
Its a POST request that sends a JSON in body with 4 values. Here is a JavaScript fetch example
It will recieve an answer for a user provided question
`` ` ``
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
`` ` ``
In response you will get a json document like this one:
`` ` ``
{
"answer": " Hi there! How can I help you?\\n",
"query": "Hi",
"result": " Hi there! How can I help you?\\nSOURCES:"
}
`` ` ``
/api/docs_check
It will make sure documentation is loaded on a server (just run it everytime user is switching between libraries (documentations)
Its a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example
`` ` ``
// answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
`` ` ``
In response you will get a json document like this one:
`` ` ``
{
"status": "exists"
}
`` ` ``
Issues and Pull requests
We value contributions to our issues in form of discussion or suggestion, we recommend that you check out existing issues and our Roadmap
If you want to contribute by writing code there are few things that you should know before doing it:
We have frontend (React, Vite) and Backend (python)
### Answer
```
docsgpt-14b:
```
import requests
import json
def send_request(question):
url = "http://127.0.0.1:5000/api/answer"
payload = {
"question": question,
"history": None,
"api_key": "YOUR_API_KEY",
"embeddings_key": "YOUR_EMBEDDINGS_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"
}
headers = {
"Content-Type": "application/json; charset=utf-8"
}
response = requests.post(url, json=payload, headers=headers)
if response.status_code == 200:
answer = response.json()["answer"]
return answer
else:
return "Error: Could not connect to server."
def check_docs_loaded():
url = "http://127.0.0.1:5000/api/docs_check"
payload = {"docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"}
headers = {
"Content-Type": "application/json; charset=utf-8"
}
response = requests.post(url, json=payload, headers=headers)
if response.status_code == 200:
status = response.json()["status"]
if status == "exists":
return True
else:
return False
else:
return False
def mock_request(question):
if check_docs_loaded():
return send_request(question)
else:
return "Error: Documentation not loaded."
# Example usage
question = "What is the capital of France?"
answer = mock_request(question)
print(answer)
This is a mock request to the /api/answer endpoint using Python. It sends a JSON in the body with the question and other required parameters. The response is then parsed and returned.
```
The original meta-llama/Llama-2-14b:
```
# answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
### Docs_check
# answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
### Installation
# Install dependencies
npm install
# Run the app
npm run start
### License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details
### Contributing
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and
we are always open to pull requests, and help.
### Acknowledgments
Thanks to the following people for their help and support:
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https://github.com/Arc53)
- [@Arc53](https
```
| 8,841 | [
[
-0.03546142578125,
-0.04620361328125,
0.032928466796875,
0.0027980804443359375,
-0.0169219970703125,
0.0092010498046875,
-0.00798797607421875,
-0.030670166015625,
0.037353515625,
0.0203094482421875,
-0.043548583984375,
-0.05255126953125,
-0.033294677734375,
... |
ihanif/speecht5_finetuned_voxpopuli_lt | 2023-08-16T13:42:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"audio",
"text-to-speech",
"lt",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-to-speech | ihanif | null | null | ihanif/speecht5_finetuned_voxpopuli_lt | 0 | 2 | transformers | 2023-08-16T10:33:06 | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- audio
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_lt
results: []
pipeline_tag: text-to-speech
language:
- lt
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_lt
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
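The total train batch size above follows from the per-device batch size and gradient accumulation; a quick arithmetic check (not part of the training script):

```python
# Effective batch size = per-device batch size x gradient accumulation steps
train_batch_size = 4
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # -> 32
```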
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.4674 | 444.44 | 1000 | 0.5797 |
| 0.4314 | 888.89 | 2000 | 0.5502 |
| 0.4122 | 1333.33 | 3000 | 0.5617 |
| 0.4055 | 1777.78 | 4000 | 0.5631 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 | 1,680 | [
[
-0.0312042236328125,
-0.040679931640625,
-0.00278472900390625,
0.0067901611328125,
-0.0204010009765625,
-0.02313232421875,
-0.01373291015625,
-0.00853729248046875,
-0.01126861572265625,
0.0205535888671875,
-0.048248291015625,
-0.049652099609375,
-0.0430908203125... |
juliussteen/DeBERTa-v3-FaithAug | 2023-08-16T16:33:44.000Z | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"dataset:anli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | juliussteen | null | null | juliussteen/DeBERTa-v3-FaithAug | 1 | 2 | transformers | 2023-08-16T15:03:29 | ---
license: mit
datasets:
- anli
library_name: transformers
---
This is a DeBERTa-v3 large NLI model that has been fine-tuned on augmented ANLI data to improve performance for faithfulness evaluation, particularly in a knowledge-grounded dialog setting.
See https://aclanthology.org/2023.acl-short.79 for details. The model itself is based on https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli
For a convenient interface for scoring see our repository: https://github.com/julmaxi/with_a_little_push
Note that this is not the original model used in the paper but a retrained variant that has slightly better average performance.
| 663 | [
[
-0.0028476715087890625,
-0.07208251953125,
0.033233642578125,
0.01517486572265625,
-0.00864410400390625,
-0.0234222412109375,
0.006618499755859375,
-0.049896240234375,
0.0330810546875,
0.038909912109375,
-0.03216552734375,
-0.0253448486328125,
-0.026443481445312... |
kyujinpy/Tune-A-VideKO-disney | 2023-08-17T11:04:46.000Z | [
"diffusers",
"tune-a-video",
"text-to-video",
"korean",
"arxiv:2212.11565",
"arxiv:2112.10752",
"license:creativeml-openrail-m",
"diffusers:TuneAVideoPipeline",
"region:us"
] | text-to-video | kyujinpy | null | null | kyujinpy/Tune-A-VideKO-disney | 3 | 2 | diffusers | 2023-08-16T19:13:40 | ---
license: creativeml-openrail-m
base_model: kyujinpy/Tune-A-VideKO-disney
training_prompt: A bear is playing guitar
tags:
- tune-a-video
- text-to-video
- diffusers
- korean
inference: false
---
# Tune-A-VideKO-disney
Github: [Kyujinpy/Tune-A-VideKO](https://github.com/KyujinHan/Tune-A-VideKO)
## Model Description
- Base model: [kyujinpy/KO-stable-diffusion-disney](https://huggingface.co/kyujinpy/KO-stable-diffusion-disney)
- Training prompt: A bear is playing guitar

## Samples

Test prompt: 토끼가 기타를 치고 있습니다, 모던한 디즈니 스타일

Test prompt: 잘생긴 왕자가 기타를 치고 있습니다, 모던한 디즈니 스타일

Test prompt: 사자가 기타를 치고 있습니다, 모던한 디즈니 스타일
## Usage
Clone the github repo
```bash
git clone https://github.com/showlab/Tune-A-Video.git
```
Run inference code
```python
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.models.unet import UNet3DConditionModel
from tuneavideo.util import save_videos_grid
import torch
pretrained_model_path = "kyujinpy/KO-stable-diffusion-disney"
unet_model_path = "kyujinpy/Tune-A-VideKO-disney"
unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder='unet', torch_dtype=torch.float16).to('cuda')
pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
prompt = "사자가 기타를 치고 있습니다, 모던한 디즈니 스타일"
video = pipe(prompt, video_length=14, height=512, width=512, num_inference_steps=50, guidance_scale=7.5).videos
save_videos_grid(video, f"./{prompt}.gif")
```
## Related Papers:
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models
| 1,898 | [
[
-0.04449462890625,
-0.06304931640625,
0.02032470703125,
0.025604248046875,
-0.0284881591796875,
-0.007724761962890625,
-0.0187530517578125,
0.0081329345703125,
0.01476287841796875,
0.0241241455078125,
-0.049041748046875,
-0.042083740234375,
-0.043304443359375,
... |
OpenAssistant/falcon-40b-megacode2-oasst | 2023-08-18T09:06:07.000Z | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"dataset:OpenAssistant/oasst1",
"dataset:rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text-generation | OpenAssistant | null | null | OpenAssistant/falcon-40b-megacode2-oasst | 6 | 2 | transformers | 2023-08-17T01:30:35 | ---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---
# falcon-40b-megacode2-oasst
- wandb: stage 1: [run37_megacode_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run37_megacode_falcon40), stage 2: [run38_megacode_oasst_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run38_megacode_oasst_falcon40)
- sampling report: [2023-08-17_OpenAssistant_falcon-40b-megacode2-oasst_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-17_OpenAssistant_falcon-40b-megacode2-oasst_sampling_noprefix2.json)
- stage 1 model: [andreaskoepf/falcon-40b-megacode2](https://huggingface.co/andreaskoepf/falcon-40b-megacode2)
## Prompt Template
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:
"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
Multi-line:
```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
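A minimal sketch of building this prompt string in Python; the function name is illustrative, and only the token layout comes from the template above:

```python
def build_chatml_prompt(user_prompt: str) -> str:
    """Format a single-turn ChatML prompt, leaving the assistant turn open
    so the model generates the answer (terminated by <|im_end|>)."""
    return (
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Write a haiku about code review.")
print(prompt)
```

When sampling, set `<|im_end|>` as a stop sequence so generation ends cleanly after the assistant turn.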
### Credits & Special Thanks
- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/)
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored).
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.
| 1,732 | [
[
-0.049652099609375,
-0.059051513671875,
0.01306915283203125,
0.019195556640625,
-0.00040078163146972656,
-0.01165771484375,
-0.0028209686279296875,
-0.0220947265625,
0.0271148681640625,
0.034881591796875,
-0.053131103515625,
-0.0281829833984375,
-0.0377807617187... |
Villian7/HumanEmotions | 2023-10-20T00:14:52.000Z | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Villian7 | null | null | Villian7/HumanEmotions | 0 | 2 | transformers | 2023-08-17T03:07:06 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: HumanEmotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9259588893157831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HumanEmotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.926
- F1: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8448 | 1.0 | 250 | 0.3310 | 0.906 | 0.9033 |
| 0.2576 | 2.0 | 500 | 0.2155 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,826 | [
[
-0.034637451171875,
-0.03265380859375,
0.01052093505859375,
0.01947021484375,
-0.0267791748046875,
-0.021331787109375,
-0.007785797119140625,
-0.01085662841796875,
0.01259613037109375,
0.008087158203125,
-0.05914306640625,
-0.050079345703125,
-0.05377197265625,
... |
HangenYuu/xlm-roberta-large-finetuned-hate-implicit | 2023-08-17T14:33:17.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | HangenYuu | null | null | HangenYuu/xlm-roberta-large-finetuned-hate-implicit | 0 | 2 | transformers | 2023-08-17T10:33:54 | ---
license: mit
base_model: joeddav/xlm-roberta-large-xnli
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-finetuned-hate-implicit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-hate-implicit
This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6051
- eval_accuracy: 0.7768
- eval_f1: 0.7721
- eval_runtime: 107.6127
- eval_samples_per_second: 39.921
- eval_steps_per_second: 0.316
- epoch: 3.98
- step: 537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,352 | [
[
-0.0301971435546875,
-0.04864501953125,
0.0286865234375,
0.00949859619140625,
-0.0216522216796875,
-0.033447265625,
-0.0224456787109375,
-0.0283050537109375,
0.00522613525390625,
0.0310821533203125,
-0.053558349609375,
-0.047607421875,
-0.048309326171875,
-0... |
Prikshit7766/en_pipeline | 2023-08-17T11:54:05.000Z | [
"spacy",
"text-classification",
"en",
"region:us"
] | text-classification | Prikshit7766 | null | null | Prikshit7766/en_pipeline | 0 | 2 | spacy | 2023-08-17T11:53:22 | ---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_pipeline
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `POS`, `NEG` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 87.23 |
| `CATS_MICRO_P` | 87.24 |
| `CATS_MICRO_R` | 87.24 |
| `CATS_MICRO_F` | 87.24 |
| `CATS_MACRO_P` | 87.29 |
| `CATS_MACRO_R` | 87.24 |
| `CATS_MACRO_F` | 87.23 |
| `CATS_MACRO_AUC` | 93.90 |
| `TEXTCAT_LOSS` | 1499.40 | | 897 | [
[
-0.04730224609375,
-0.0181427001953125,
0.0220794677734375,
0.020843505859375,
-0.049346923828125,
0.023193359375,
0.0060577392578125,
-0.0026416778564453125,
0.0631103515625,
0.048553466796875,
-0.0528564453125,
-0.06866455078125,
-0.048187255859375,
0.0059... |
vikneshk/finetune_small_imdb_sentiment | 2023-08-17T14:24:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | vikneshk | null | null | vikneshk/finetune_small_imdb_sentiment | 0 | 2 | transformers | 2023-08-17T14:03:25 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetune_small_imdb_sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9083333333333333
- name: F1
type: f1
value: 0.9084249084249084
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_small_imdb_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2615
- Accuracy: 0.9083
- F1: 0.9084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,580 | [
[
-0.043212890625,
-0.042724609375,
0.01041412353515625,
0.0059967041015625,
-0.03765869140625,
-0.016510009765625,
-0.0101776123046875,
-0.00278472900390625,
0.0158233642578125,
0.02227783203125,
-0.05584716796875,
-0.03973388671875,
-0.06219482421875,
-0.003... |
TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ | 2023-09-27T12:45:51.000Z | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ | 3 | 2 | transformers | 2023-08-17T14:07:00 | ---
license: other
model_name: Llama2 13B MegaCode2 OASST
base_model: OpenAssistant/llama2-13b-megacode2-oasst
inference: false
model_creator: OpenAssistant
model_type: llama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 13B MegaCode2 OASST - GPTQ
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [Llama2 13B MegaCode2 OASST](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst)
<!-- description start -->
## Description
This repo contains GPTQ model files for [OpenAssistant's Llama2 13B MegaCode2 OASST](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGUF)
* [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [OpenAssistant's Llama2 13B MegaCode2 OASST](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst).
<!-- licensing end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
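Each branch also ships a `quantize_config.json` recording these parameters, which modern clients read automatically. For the `main` branch the file would look roughly like this (field names follow AutoGPTQ; treat it as an illustrative sketch, not a verbatim copy):

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.1,
  "desc_act": false,
  "sym": true,
  "true_sequential": true
}
```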
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama2-13B-MegaCode2-OASST-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
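If you want to check the installed versions against the minimums above, a plain version-tuple comparison is enough for simple numeric versions (a sketch; use the `packaging` library for real code, since versions with suffixes like `2.0.1+cu118` would break this):

```python
# Compare simple dotted version strings numerically ("4.33.1" >= "4.32.0").
def at_least(installed: str, minimum: str) -> bool:
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return parse(installed) >= parse(minimum)

print(at_least("4.33.1", "4.32.0"))  # True
print(at_least("1.11.0", "1.12.0"))  # False
```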
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # define the ChatML system prompt before formatting
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: OpenAssistant's Llama2 13B MegaCode2 OASST
# llama2-13b-megacode2-oasst
- sampling report: [2023-08-15_andreaskoepf_llama2-13b-megacode2-oasst_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-15_andreaskoepf_llama2-13b-megacode2-oasst_sampling_noprefix2.json)
### Prompt template
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:
"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
Multi-line:
```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
### Credits & Special Thanks
- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/)
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored)
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.
| 17,147 | [embeddings truncated] |
TheBloke/Llama2-13B-MegaCode2-OASST-GGML | 2023-09-27T13:01:37.000Z | [
"transformers",
"llama",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/Llama2-13B-MegaCode2-OASST-GGML | 6 | 2 | transformers | 2023-08-17T14:07:00 | ---
license: llama2
model_name: Llama2 13B MegaCode2 OASST
inference: false
model_creator: OpenAssistant
model_link: https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst
model_type: llama
quantized_by: TheBloke
base_model: OpenAssistant/llama2-13b-megacode2-oasst
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 13B MegaCode2 OASST - GGML
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [Llama2 13B MegaCode2 OASST](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst)
## Description
This repo contains GGML format model files for [OpenAssistant's Llama2 13B MegaCode2 OASST](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the ctransformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML)
* [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst)
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
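For illustration, the template above can be rendered with a small helper like this (a sketch; substitute your own system message and prompt):

```python
# Minimal helper that renders the ChatML template shown above.
def chatml_prompt(system_message: str, prompt: str) -> str:
    return (f"<|im_start|>system\n{system_message}<|im_end|>\n"
            f"<|im_start|>user\n{prompt}<|im_end|>\n"
            f"<|im_start|>assistant\n")

print(chatml_prompt("You are a story writing assistant.", "Write a story about llamas"))
```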
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
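As a rough sanity check on the bpw figures above, pure-quant file size is just parameters × bits ÷ 8; real GGML files mix quant types across tensors, so actual sizes in the table below run somewhat larger (a sketch):

```python
# Back-of-the-envelope size estimate from bits-per-weight (decimal GB).
def estimate_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

print(round(estimate_size_gb(13e9, 4.5), 2))  # ~7.31 GB for a pure 4.5 bpw quant of 13B
```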
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-13b-megacode2-oasst.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama2-13b-megacode2-oasst.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama2-13b-megacode2-oasst.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-13b-megacode2-oasst.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-13b-megacode2-oasst.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| [llama2-13b-megacode2-oasst.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama2-13b-megacode2-oasst.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama2-13b-megacode2-oasst.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama2-13b-megacode2-oasst.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama2-13b-megacode2-oasst.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.14 GB| 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama2-13b-megacode2-oasst.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llama2-13b-megacode2-oasst.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llama2-13b-megacode2-oasst.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors - 6-bit quantization |
| [llama2-13b-megacode2-oasst.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Llama2-13B-MegaCode2-OASST-GGML/blob/main/llama2-13b-megacode2-oasst.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m llama2-13b-megacode2-oasst.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\nYou are a story writing assistant.<|im_end|>\n<|im_start|>user\nWrite a story about llamas<|im_end|>\n<|im_start|>assistant"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
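Put differently, linear RoPE scaling trades frequency scale for context: effective context is roughly the base context divided by `--rope-freq-scale` (a sketch of that arithmetic):

```python
# Effective context window under linear RoPE scaling.
def effective_ctx(base_ctx: int, rope_freq_scale: float) -> int:
    return int(base_ctx / rope_freq_scale)

print(effective_ctx(4096, 0.5))   # 8192 (doubled context)
print(effective_ctx(4096, 0.25))  # 16384 (4x context)
```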
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: OpenAssistant's Llama2 13B MegaCode2 OASST
# llama2-13b-megacode2-oasst
- sampling report: [2023-08-15_andreaskoepf_llama2-13b-megacode2-oasst_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-15_andreaskoepf_llama2-13b-megacode2-oasst_sampling_noprefix2.json)
### Prompt template
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:
"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
Multi-line:
```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
### Credits & Special Thanks
- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/)
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored)
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.
| 15,841 | [embeddings truncated] |
asas-ai/opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en | 2023-09-17T16:24:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | asas-ai | null | null | asas-ai/opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en | 0 | 2 | transformers | 2023-08-17T15:36:11 | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-ar-en
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en
results: []
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8682
- Bleu: 63.4498
- Gen Len: 59.457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
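For reference, the linear scheduler above decays the learning rate from 2e-05 to zero across the 11050 training steps; a minimal sketch (assuming no warmup, which the log does not specify):

```python
# Linear LR decay from base_lr at step 0 to 0 at total_steps.
def linear_lr(step: int, total_steps: int = 11050, base_lr: float = 2e-05) -> float:
    return base_lr * max(0.0, 1 - step / total_steps)

print(linear_lr(0))      # 2e-05 at step 0
print(linear_lr(5525))   # 1e-05 at the halfway point
```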
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.9549 | 1.0 | 1105 | 1.2644 | 43.0637 | 61.33 |
| 0.7674 | 2.0 | 2210 | 1.0862 | 51.6055 | 60.714 |
| 0.6736 | 3.0 | 3315 | 0.9910 | 56.1642 | 60.434 |
| 0.6011 | 4.0 | 4420 | 0.9463 | 59.6059 | 59.682 |
| 0.5543 | 5.0 | 5525 | 0.9158 | 61.101 | 59.493 |
| 0.5176 | 6.0 | 6630 | 0.8961 | 61.9065 | 59.468 |
| 0.4849 | 7.0 | 7735 | 0.8840 | 62.6833 | 59.5 |
| 0.4692 | 8.0 | 8840 | 0.8727 | 63.0766 | 59.425 |
| 0.464 | 9.0 | 9945 | 0.8709 | 63.3354 | 59.454 |
| 0.4486 | 10.0 | 11050 | 0.8682 | 63.4498 | 59.457 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 | 2,170 | [
[
-0.039581298828125,
-0.04541015625,
0.0092010498046875,
0.00991058349609375,
-0.0189361572265625,
-0.0246734619140625,
-0.00977325439453125,
-0.0160064697265625,
0.0250396728515625,
0.031524658203125,
-0.05712890625,
-0.046875,
-0.04925537109375,
-0.00037598... |
wlaminack/testingmodel | 2023-09-14T18:56:34.000Z | [
"sklearn",
"joblib",
"tabular-classification",
"dataset:wine-quality",
"dataset:lvwerra/red-wine",
"region:us"
] | tabular-classification | wlaminack | null | null | wlaminack/testingmodel | 0 | 2 | sklearn | 2023-08-17T16:47:55 | ---
tags:
- tabular-classification
- sklearn
datasets:
- wine-quality
- lvwerra/red-wine
widget:
structuredData:
fixed_acidity:
- 7.4
- 7.8
- 10.3
volatile_acidity:
- 0.7
- 0.88
- 0.32
citric_acid:
- 0
- 0
- 0.45
residual_sugar:
- 1.9
- 2.6
- 6.4
chlorides:
- 0.076
- 0.098
- 0.073
free_sulfur_dioxide:
- 11
- 25
- 5
total_sulfur_dioxide:
- 34
- 67
- 13
density:
- 0.9978
- 0.9968
- 0.9976
pH:
- 3.51
- 3.2
- 3.23
sulphates:
- 0.56
- 0.68
- 0.82
alcohol:
- 9.4
- 9.8
- 12.6
---
## Wine Quality classification clone for testing
### A Simple Example of Scikit-learn Pipeline
> Inspired by https://towardsdatascience.com/a-simple-example-of-pipeline-in-machine-learning-with-scikit-learn-e726ffbb6976 by Saptashwa Bhattacharyya
### How to use
```python
from huggingface_hub import hf_hub_url, cached_download
import joblib
import pandas as pd
REPO_ID = "wlaminack/testingmodel"
FILENAME = "sklearn_model.joblib"
model = joblib.load(cached_download(
hf_hub_url(REPO_ID, FILENAME)
))
# model is a `sklearn.pipeline.Pipeline`
```
#### Get sample data from this repo
```python
data_file = cached_download(
hf_hub_url(REPO_ID, "winequality-red.csv")
)
winedf = pd.read_csv(data_file, sep=";")
X = winedf.drop(["quality"], axis=1)
Y = winedf["quality"]
print(X[:3])
```
| | fixed acidity | volatile acidity | citric acid | residual sugar | chlorides | free sulfur dioxide | total sulfur dioxide | density | pH | sulphates | alcohol |
|---:|----------------:|-------------------:|--------------:|-----------------:|------------:|----------------------:|-----------------------:|----------:|-----:|------------:|----------:|
| 0 | 7.4 | 0.7 | 0 | 1.9 | 0.076 | 11 | 34 | 0.9978 | 3.51 | 0.56 | 9.4 |
| 1 | 7.8 | 0.88 | 0 | 2.6 | 0.098 | 25 | 67 | 0.9968 | 3.2 | 0.68 | 9.8 |
| 2 | 7.8 | 0.76 | 0.04 | 2.3 | 0.092 | 15 | 54 | 0.997 | 3.26 | 0.65 | 9.8 |
#### Get your prediction
```python
labels = model.predict(X[:3])
# [5, 5, 5]
```
#### Eval
```python
model.score(X, Y)
# 0.6616635397123202
```
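`model.score` for a classifier is plain accuracy, i.e. the fraction of exact matches; computed by hand (a sketch):

```python
# Classification accuracy: mean of exact label matches.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([5, 5, 6, 7], [5, 5, 5, 7]))  # 0.75
```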
### 🍷 Disclaimer
No red wine was drunk (unfortunately) while training this model 🍷
| 2,709 | [embeddings truncated] |
digitalpipelines/llama2_7b_chat_uncensored-GPTQ | 2023-08-18T14:02:31.000Z | [
"transformers",
"llama",
"text-generation",
"digitalpipelines",
"dataset:wikitext",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | digitalpipelines | null | null | digitalpipelines/llama2_7b_chat_uncensored-GPTQ | 0 | 2 | transformers | 2023-08-17T18:44:15 | ---
license: apache-2.0
datasets:
- wikitext
tags:
- digitalpipelines
---
# Overview
GPTQ-quantized version of [digitalpipelines/llama2_7b_chat_uncensored](https://huggingface.co/digitalpipelines/llama2_7b_chat_uncensored).
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello
### RESPONSE:
Hi, how are you?
### HUMAN:
I'm fine.
### RESPONSE:
How can I help you?
...
```
| 410 |
kadriu/shqip-mms-3 | 2023-09-28T18:01:48.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | kadriu | null | null | kadriu/shqip-mms-3 | 0 | 2 | transformers | 2023-08-17T22:33:52 | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: shqip-mms-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shqip-mms-3
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4195
- Wer: 0.3447
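Word error rate (WER) is the word-level edit distance between the hypothesis and reference transcripts, divided by the number of reference words. A minimal sketch — the example strings are illustrative, not taken from the evaluation data:

```python
def wer(reference, hypothesis):
    # Word error rate: word-level Levenshtein distance divided by
    # the number of reference words.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / len(r)

print(round(wer("si je sot", "si ke sot"), 3))  # one substitution in three words: 0.333
```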
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5717 | 0.31 | 300 | 0.9548 | 0.7373 |
| 0.7823 | 0.63 | 600 | 0.7372 | 0.5886 |
| 0.6863 | 0.94 | 900 | 0.6122 | 0.5261 |
| 0.5908 | 1.25 | 1200 | 0.5215 | 0.4440 |
| 0.5237 | 1.57 | 1500 | 0.4693 | 0.3987 |
| 0.4662 | 1.88 | 1800 | 0.4195 | 0.3447 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.1.0.dev20230810
- Datasets 2.14.3
- Tokenizers 0.13.3
| 1,679 |
doorinsight1/house-image-classification-ft | 2023-08-18T05:19:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | doorinsight1 | null | null | doorinsight1/house-image-classification-ft | 0 | 2 | transformers | 2023-08-18T05:19:34 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: house-image-classification-ft
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7142857313156128
---
# house-image-classification-ft
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### back exterior of house

#### front exterior of house

#### inside of house
 | 930 | [
[
-0.051483154296875,
-0.056549072265625,
0.006603240966796875,
0.034759521484375,
-0.029693603515625,
0.01262664794921875,
0.01320648193359375,
-0.027740478515625,
0.034210205078125,
0.0127410888671875,
-0.0162200927734375,
-0.06060791015625,
-0.04412841796875,
... |
momolee/low_light_image_recognition | 2023-08-18T05:39:04.000Z | [
"keras",
"region:us"
] | null | momolee | null | null | momolee/low_light_image_recognition | 0 | 2 | keras | 2023-08-18T05:38:43 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | SGD |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.009999999776482582 |
| momentum | 0.0 |
| nesterov | False |
| training_precision | float32 |
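With momentum 0.0 and Nesterov disabled, the SGD settings above reduce to plain gradient descent. A minimal sketch of a single update step — the toy weight and gradient values are illustrative:

```python
def sgd_step(w, grad, velocity=0.0, lr=0.009999999776482582, momentum=0.0):
    # Momentum-style update; with momentum 0.0 (the setting above) this
    # reduces to plain gradient descent: w <- w - lr * grad.
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w, _ = sgd_step(1.0, 0.5)  # toy weight 1.0, toy gradient 0.5
print(round(w, 6))  # 0.995
```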
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 802 |
Andyrasika/my-awesome-setfit-model | 2023-08-18T10:28:01.000Z | [
"transformers",
"pytorch",
"mpnet",
"feature-extraction",
"setfit",
"sentence-transformers",
"text-classification",
"en",
"dataset:PolyAI/banking77",
"arxiv:2209.11055",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Andyrasika | null | null | Andyrasika/my-awesome-setfit-model | 1 | 2 | transformers | 2023-08-18T06:58:45 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
datasets:
- PolyAI/banking77
language:
- en
metrics:
- accuracy
library_name: transformers
---
# Andyrasika/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Andyrasika/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
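SetFit's contrastive fine-tuning stage trains the sentence transformer on sentence pairs: same-label pairs serve as positives, cross-label pairs as negatives. A minimal sketch of that pair generation, with toy labels standing in for the real banking77 data:

```python
from itertools import combinations

def contrastive_pairs(examples):
    # examples: list of (text, label); emit ((a, b), 1) for same-label
    # pairs and ((a, b), 0) for different-label pairs.
    pairs = []
    for (ta, la), (tb, lb) in combinations(examples, 2):
        pairs.append(((ta, tb), 1 if la == lb else 0))
    return pairs

toy = [("hi", 0), ("hello", 0), ("bye", 1)]
print(contrastive_pairs(toy))
# [(('hi', 'hello'), 1), (('hi', 'bye'), 0), (('hello', 'bye'), 0)]
```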
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,647 |
bhagasra-saurav/bert-base-uncased-finetuned-char-hangman | 2023-08-18T08:12:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | bhagasra-saurav | null | null | bhagasra-saurav/bert-base-uncased-finetuned-char-hangman | 0 | 2 | transformers | 2023-08-18T06:59:03 | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-char-hangman
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-char-hangman
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2830
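Since the evaluation loss is a cross-entropy in nats, it roughly corresponds to a perplexity of `exp(loss)` — a quick check on the reported value:

```python
import math

# The eval loss is a cross-entropy (in nats), so it roughly corresponds
# to a perplexity of exp(loss):
print(round(math.exp(1.2830), 2))  # 3.61
```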
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.985 | 0.59 | 500 | 1.7507 |
| 1.7115 | 1.18 | 1000 | 1.6289 |
| 1.6265 | 1.78 | 1500 | 1.5502 |
| 1.5716 | 2.37 | 2000 | 1.5237 |
| 1.5265 | 2.96 | 2500 | 1.4812 |
| 1.498 | 3.55 | 3000 | 1.4562 |
| 1.4648 | 4.15 | 3500 | 1.4246 |
| 1.4463 | 4.74 | 4000 | 1.3875 |
| 1.4215 | 5.33 | 4500 | 1.3697 |
| 1.4076 | 5.92 | 5000 | 1.3530 |
| 1.3901 | 6.52 | 5500 | 1.3404 |
| 1.3767 | 7.11 | 6000 | 1.3270 |
| 1.3631 | 7.7 | 6500 | 1.3126 |
| 1.3573 | 8.29 | 7000 | 1.3212 |
| 1.3488 | 8.89 | 7500 | 1.3162 |
| 1.3397 | 9.48 | 8000 | 1.3135 |
| 1.3318 | 10.07 | 8500 | 1.2941 |
| 1.336 | 10.66 | 9000 | 1.2842 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 2,200 |
stanfordnlp/stanza-qaf | 2023-10-02T23:44:29.000Z | [
"stanza",
"token-classification",
"qaf",
"license:apache-2.0",
"region:us"
] | token-classification | stanfordnlp | null | null | stanfordnlp/stanza-qaf | 0 | 2 | stanza | 2023-08-18T07:07:21 | ---
tags:
- stanza
- token-classification
library_name: stanza
language: qaf
license: apache-2.0
---
# Stanza model for Maghrebi_Arabic_French (qaf)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-10-02 23:44:26.392
| 697 |
stanfordnlp/stanza-qpm | 2023-10-02T23:44:33.000Z | [
"stanza",
"token-classification",
"qpm",
"license:apache-2.0",
"region:us"
] | token-classification | stanfordnlp | null | null | stanfordnlp/stanza-qpm | 0 | 2 | stanza | 2023-08-18T07:07:29 | ---
tags:
- stanza
- token-classification
library_name: stanza
language: qpm
license: apache-2.0
---
# Stanza model for Pomak (qpm)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-10-02 23:44:30.109
| 680 |
jerome1519/flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51 | 2023-08-18T07:52:20.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | jerome1519 | null | null | jerome1519/flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51 | 0 | 2 | transformers | 2023-08-18T07:51:24 | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | nan | 10.5263 | 8.1081 | 10.5263 | 10.5263 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,457 |
jerome1519/t5-small-finetuned-coding_instructions_2023_08_18__08_41 | 2023-08-18T08:43:22.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | jerome1519 | null | null | jerome1519/t5-small-finetuned-coding_instructions_2023_08_18__08_41 | 0 | 2 | transformers | 2023-08-18T08:41:17 | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-coding_instructions_2023_08_18__08_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-coding_instructions_2023_08_18__08_41
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9209
- Rouge1: 13.9516
- Rouge2: 6.1527
- Rougel: 13.1037
- Rougelsum: 13.1244
- Gen Len: 18.3077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | 2.6656 | 8.6104 | 3.1562 | 8.1185 | 8.1422 | 19.0 |
| No log | 2.0 | 10 | 2.5149 | 9.7852 | 3.836 | 9.3185 | 9.3322 | 19.0 |
| No log | 3.0 | 15 | 2.3683 | 13.1134 | 5.2015 | 12.1364 | 12.2677 | 19.0 |
| No log | 4.0 | 20 | 2.2032 | 13.4182 | 5.1369 | 12.5255 | 12.6118 | 19.0 |
| No log | 5.0 | 25 | 2.0986 | 13.6902 | 5.3556 | 12.7848 | 12.898 | 19.0 |
| No log | 6.0 | 30 | 2.0232 | 12.7675 | 4.8786 | 11.9464 | 11.9539 | 18.3846 |
| No log | 7.0 | 35 | 1.9857 | 13.9444 | 6.1527 | 13.0926 | 13.1171 | 18.5385 |
| No log | 8.0 | 40 | 1.9526 | 13.9516 | 6.1527 | 13.1037 | 13.1244 | 18.5385 |
| No log | 9.0 | 45 | 1.9303 | 13.9516 | 6.1527 | 13.1037 | 13.1244 | 18.3077 |
| No log | 10.0 | 50 | 1.9209 | 13.9516 | 6.1527 | 13.1037 | 13.1244 | 18.3077 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 2,517 |
harvinder676/ner-distillbert-ner-tags | 2023-08-18T09:24:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | harvinder676 | null | null | harvinder676/ner-distillbert-ner-tags | 0 | 2 | transformers | 2023-08-18T09:17:29 | ---
license: apache-2.0
base_model: harvinder676/ner-distillbert-ner-tags
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-distillbert-ner-tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-distillbert-ner-tags
This model is a fine-tuned version of [harvinder676/ner-distillbert-ner-tags](https://huggingface.co/harvinder676/ner-distillbert-ner-tags) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1387
- Precision: 0.8521
- Recall: 0.8574
- F1: 0.8547
- Accuracy: 0.9710
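The reported F1 is the harmonic mean of the precision and recall above — a quick check:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8521, 0.8574), 4))  # 0.8547
```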
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.1262 | 0.8023 | 0.8589 | 0.8296 | 0.9666 |
| No log | 2.0 | 26 | 0.1178 | 0.8262 | 0.8313 | 0.8287 | 0.9673 |
| No log | 3.0 | 39 | 0.1244 | 0.8165 | 0.8328 | 0.8246 | 0.9671 |
| No log | 4.0 | 52 | 0.1280 | 0.8275 | 0.8313 | 0.8294 | 0.9683 |
| No log | 5.0 | 65 | 0.1338 | 0.8410 | 0.8436 | 0.8423 | 0.9685 |
| No log | 6.0 | 78 | 0.1378 | 0.8346 | 0.8589 | 0.8466 | 0.9688 |
| No log | 7.0 | 91 | 0.1359 | 0.8452 | 0.8543 | 0.8497 | 0.9706 |
| No log | 8.0 | 104 | 0.1336 | 0.8416 | 0.8635 | 0.8524 | 0.9707 |
| No log | 9.0 | 117 | 0.1376 | 0.8472 | 0.8589 | 0.8530 | 0.9709 |
| No log | 10.0 | 130 | 0.1387 | 0.8521 | 0.8574 | 0.8547 | 0.9710 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 2,426 |
leofn3/modelo_racismo | 2023-09-02T17:01:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | text-classification | leofn3 | null | null | leofn3/modelo_racismo | 0 | 2 | transformers | 2023-08-18T14:11:56 | ---
license: other
base_model: PORTULAN/albertina-ptbr
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: modelo_racismo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelo_racismo
This model is a fine-tuned version of [PORTULAN/albertina-ptbr](https://huggingface.co/PORTULAN/albertina-ptbr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
- Accuracy: 0.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 468 | 0.2304 | 0.9583 |
| 0.7037 | 2.0 | 936 | 0.0847 | 0.9840 |
| 0.256 | 3.0 | 1404 | 0.0075 | 0.9979 |
| 0.0759 | 4.0 | 1872 | 0.0036 | 0.9989 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,549 |
Francesco-A/finetuned-kde4-en-to-fr | 2023-08-18T19:25:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | Francesco-A | null | null | Francesco-A/finetuned-kde4-en-to-fr | 0 | 2 | transformers | 2023-08-18T15:52:44 | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88529894542656
---
# Model description (finetuned-kde4-en-to-fr)
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8853
## Intended uses
- Translation of English text to French
- Generating coherent and accurate translations in the domain of technical computer science
## Limitations
- The model's performance may degrade when translating sentences with complex or domain-specific terminology that was not present in the training data.
- It may struggle with idiomatic expressions and cultural nuances that are not captured in the training data.
## Training and evaluation data
The model was fine-tuned on the KDE4 dataset, which consists of pairs of sentences in English and their French translations. The dataset contains 189,155 pairs for training and 21,018 pairs for validation.
## Training procedure
The model was trained using the Seq2SeqTrainer API from the 🤗 Transformers library. The training procedure involved tokenizing the input English sentences and target French sentences, preparing the data collation for dynamic batching and fine-tuning the model. The evaluation metric used is *SacreBLEU*.
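SacreBLEU combines modified n-gram precisions with a brevity penalty that discounts translations shorter than the reference. A minimal sketch of the penalty term alone (the lengths are illustrative):

```python
import math

def brevity_penalty(candidate_len, reference_len):
    # BLEU's brevity penalty: 1.0 when the candidate is at least as long
    # as the reference, exp(1 - r/c) when it is shorter.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)

print(round(brevity_penalty(9, 12), 3))  # 0.717
```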
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training details
Training loss over the course of fine-tuning:
| Step | Training Loss |
|--------|---------------|
| 500 | 1.423400 |
| 1000 | 1.233600 |
| 1500 | 1.184600 |
| 2000 | 1.125000 |
| 2500 | 1.113000 |
| 3000 | 1.070500 |
| 3500 | 1.063300 |
| 4000 | 1.031900 |
| 4500 | 1.017900 |
| 5000 | 1.008200 |
| 5500 | 1.002500 |
| 6000 | 0.973900 |
| 6500 | 0.907700 |
| 7000 | 0.920600 |
| 7500 | 0.905000 |
| 8000 | 0.900300 |
| 8500 | 0.888500 |
| 9000 | 0.892000 |
| 9500 | 0.881200 |
| 10000 | 0.890200 |
| 10500 | 0.881500 |
| 11000 | 0.876800 |
| 11500 | 0.861000 |
| 12000 | 0.854800 |
| 12500 | 0.819500 |
| 13000 | 0.818100 |
| 13500 | 0.827400 |
| 14000 | 0.806400 |
| 14500 | 0.811000 |
| 15000 | 0.815600 |
| 15500 | 0.818500 |
| 16000 | 0.804800 |
| 16500 | 0.827200 |
| 17000 | 0.808300 |
| 17500 | 0.807600 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 3,194 |
xzuyn/GPT-2-Small-Stripped | 2023-08-18T20:35:20.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | xzuyn | null | null | xzuyn/GPT-2-Small-Stripped | 0 | 2 | transformers | 2023-08-18T19:43:37 | ---
language:
- en
library_name: transformers
---
This is just an experiment to see if a script of mine works.
GPT-2-Small, but I removed all layers in between `h.0.mlp.c_proj.bias` and `ln_f.weight`.
These are the only layers/tensors left (I'm unsure of the terminology for these):
```
wte.weight
wpe.weight
h.0.ln_1.weight
h.0.ln_1.bias
h.0.attn.bias
h.0.attn.c_attn.weight
h.0.attn.c_attn.bias
h.0.attn.c_proj.weight
h.0.attn.c_proj.bias
h.0.ln_2.weight
h.0.ln_2.bias
h.0.mlp.c_fc.weight
h.0.mlp.c_fc.bias
h.0.mlp.c_proj.weight
h.0.mlp.c_proj.bias
ln_f.weight
ln_f.bias
```
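The stripping itself can be sketched as a key filter over the state dict — shown here with a plain dict standing in for a real `torch` state dict, using only a few illustrative keys:

```python
def strip_blocks(state_dict, keep_prefixes=("wte.", "wpe.", "h.0.", "ln_f.")):
    # Keep only the embedding tensors, the first transformer block,
    # and the final layer norm; drop everything else.
    return {k: v for k, v in state_dict.items() if k.startswith(keep_prefixes)}

# Plain dict with a few illustrative keys in place of a real state dict:
toy = {"wte.weight": 0, "h.0.ln_1.weight": 1, "h.5.ln_1.weight": 2, "ln_f.bias": 3}
print(sorted(strip_blocks(toy)))  # ['h.0.ln_1.weight', 'ln_f.bias', 'wte.weight']
```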
| 579 |
barbieheimer/MND_TweetEvalBert_model | 2023-08-22T12:40:58.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tweet_eval",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | barbieheimer | null | null | barbieheimer/MND_TweetEvalBert_model | 1 | 2 | transformers | 2023-08-19T05:33:08 | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- tweet_eval
model-index:
- name: MND_TweetEvalBert_model
results: []
language:
- en
pipeline_tag: text-classification
metrics:
- accuracy
widget:
- text: 'I loved Barbie and Oppenheimer'
example_title: Barbenheimer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MND_TweetEvalBert_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7241
## Model description
This model was trained for sentiment analysis using a text-classification architecture. Here is how to use it with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("barbieheimer/MND_TweetEvalBert_model")
model = AutoModelForSequenceClassification.from_pretrained("barbieheimer/MND_TweetEvalBert_model")
# We can now use the model in the pipeline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
# Get some text to fool around with for a basic test.
text = "I loved Oppenheimer and Barbie "
classifier(text) # Let's see if the model works on our example text.
```
```
[{'label': 'JOY', 'score': 0.9845513701438904}]
```
## Training Evaluation Results
```python
{'eval_loss': 0.7240552306175232,
'eval_runtime': 3.7803,
'eval_samples_per_second': 375.896,
'eval_steps_per_second': 23.543,
'epoch': 5.0}
```
## Overall Model Evaluation Results
```python
{'accuracy': {'confidence_interval': (0.783, 0.832),
'standard_error': 0.01241992329458207,
'score': 0.808},
'total_time_in_seconds': 150.93268656500004,
'samples_per_second': 6.625470087086432,
'latency_in_seconds': 0.15093268656500003}
```
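The reported interval is close to what a plain normal approximation around the accuracy score gives — a quick sketch (the 1.96 z-value is the standard 95% choice, an assumption not stated in the card):

```python
def normal_ci(score, stderr, z=1.96):
    # 95% normal-approximation interval around an accuracy estimate.
    return (score - z * stderr, score + z * stderr)

lo, hi = normal_ci(0.808, 0.01241992329458207)
print(round(lo, 3), round(hi, 3))  # 0.784 0.832
```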
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
```python
{'training_loss': 0.3821827131159165}
{'train_runtime': 174.1546, 'train_samples_per_second': 93.509,
'train_steps_per_second': 5.857, 'total_flos': 351397804992312.0,
'train_loss': 0.3821827131159165, 'epoch': 5.0}
```
```
Step: 500
{training loss: 0.607100}
Step: 1000
{training loss: 0.169000}
```
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 | 2,810 |
Youngwoo9/FlanPyeongsan | 2023-08-19T05:52:26.000Z | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"ko",
"dataset:Youngwoo9/autotrain-data-flanpyeongsan",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | Youngwoo9 | null | null | Youngwoo9/FlanPyeongsan | 0 | 2 | transformers | 2023-08-19T05:42:27 | ---
tags:
- autotrain
- summarization
language:
- ko
widget:
- text: "I love AutoTrain"
datasets:
- Youngwoo9/autotrain-data-flanpyeongsan
co2_eq_emissions:
emissions: 5.58437237268958
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 83363142260
- CO2 Emissions (in grams): 5.5844
## Validation Metrics
- Loss: 0.407
- Rouge1: 13.408
- Rouge2: 2.256
- RougeL: 13.360
- RougeLsum: 13.291
- Gen Len: 18.766
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Youngwoo9/autotrain-flanpyeongsan-83363142260
``` | 720 | [
[
-0.033203125,
-0.031280517578125,
0.0256195068359375,
0.0146942138671875,
0.0012960433959960938,
-0.0004925727844238281,
0.01568603515625,
-0.0169677734375,
0.0217437744140625,
0.0188751220703125,
-0.062744140625,
-0.0274810791015625,
-0.053863525390625,
-0.... |
Livyatan/mT5-small-Hebrew-ParaShoot-QA | 2023-08-19T15:06:21.000Z | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2109.11314",
"arxiv:2010.11934",
"arxiv:2212.09682",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Livyatan | null | null | Livyatan/mT5-small-Hebrew-ParaShoot-QA | 0 | 2 | transformers | 2023-08-19T14:04:30 | ---
datasets:
- https://github.com/omrikeren/ParaShoot/
metrics:
- f1
- exact_match
language: he
pipeline_tag: question-answering
license: unknown
---
## mT5-small-Hebrew-ParaShoot-QA
This repository contains a fine-tuned mT5-small (Multilingual Text-to-Text Transfer Transformer) model on the [ParaShoot](https://arxiv.org/pdf/2109.11314.pdf) dataset ([github](https://github.com/omrikeren/ParaShoot/tree/main/data)).
To enhance its performance, a "domain-specific" fine-tuning approach was employed.
Initially, the model was pretrained on a Hebrew dataset to capture Hebrew linguistic nuances.
Subsequently, I further fine-tuned the model on the ParaShoot dataset, aiming to improve its proficiency in the Question-Answering task.
This model builds upon the original work by [imvladikon](https://huggingface.co/imvladikon/) who initially fine-tuned the mT5-small model for the [summarization task](https://huggingface.co/imvladikon/het5_small_summarization).
# Model Details
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the mC4 corpus, covering 101 languages.
Note: mT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
# Related papers:
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel
Paper: [Multilingual Sequence-to-Sequence Models for Hebrew NLP](https://arxiv.org/pdf/2212.09682.pdf)
Authors: Matan Eyal, Hila Noga, Roee Aharoni, Idan Szpektor, Reut Tsarfaty
Paper: [PARASHOOT: A Hebrew Question Answering Dataset](https://arxiv.org/pdf/2109.11314.pdf)
Authors: Omri Keren, Omer Levy
# This model achieves the following results on the test set:
- Overall F1: 49.612
- Overall EM: 26.439
- Loss: 1.346
Note: In the paper [Multilingual Sequence-to-Sequence Models for Hebrew NLP](https://arxiv.org/pdf/2212.09682.pdf) the results were F1 - 48.71, EM - 24.52.
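For reference, the F1 reported here is the SQuAD-style token-overlap F1 used for extractive QA. A minimal sketch of the core computation (whitespace tokenization only; the official evaluation script also applies answer normalization):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a gold answer."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("כ-90 מינים", "כ-90 מינים"))  # exact match scores 1.0
```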
# How to use the model:
Use the code below to get started with the model.
```python
import torch
from transformers import MT5ForConditionalGeneration, AutoTokenizer

MODEL_NAME = "Livyatan/mT5-small-Hebrew-ParaShoot-QA"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME).to(DEVICE)
```
```python
def generate_answer(question, context):
input_encoding = tokenizer(
question,
context,
max_length = len(context),
padding="max_length",
truncation="only_second",
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt"
).to(DEVICE)
with torch.no_grad():
generated_ids = model.generate(
input_ids = input_encoding['input_ids'].to(DEVICE),
attention_mask = input_encoding['attention_mask'].to(DEVICE),
max_length=20,
)
preds = [
tokenizer.decode(generated_id, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for generated_id in generated_ids
]
return "".join(preds)
context = 'סדרת הלווייתנאים כוללת כ-90 מינים, שכולם חיים באוקיינוסים מלבד חמישה מיני דולפינים החיים במים מתוקים. הלווייתנאים החיים מחולקים לשתי תת-סדרות: לווייתני מזיפות (Mysticeti) ולווייתני שיניים (Odontoceti; ובהם גם דולפינים); בעבר התקיימה תת-סדרה נוספת: לווייתנים קדומים (Archaeoceti), שנכחדה. במרבית המקרים לווייתני המזיפות גדולים באופן משמעותי מלווייתני השיניים, הקטנים והמהירים יותר, וכמה מלווייתני המזיפות הם מבעלי החיים הגדולים ביותר בכדור הארץ. לווייתני השיניים מתאפיינים בשיניים חדות, והם ציידים מהירים שניזונים מדגים ומיצורים ימיים אחרים. לעומתם לווייתני המזיפות הם חסרי שיניים ובמקום זאת יש להם מזיפות ארוכות דמויות מסננת, שבעזרתן הם מסננים פלנקטון מהמים.'
question = 'כמה מינים כוללת סדרת הלווייתנאים?'
answer = generate_answer(question, context)
print(answer)
# output: 'כ-90 מינים'
```
| 3,845 | [
[
-0.036865234375,
-0.04638671875,
0.0243988037109375,
0.025787353515625,
-0.040008544921875,
-0.0229339599609375,
-0.0081024169921875,
-0.0272369384765625,
0.0026874542236328125,
0.02301025390625,
-0.053497314453125,
-0.043487548828125,
-0.0426025390625,
0.02... |
time-machine/test | 2023-08-20T20:33:03.000Z | [
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | time-machine | null | null | time-machine/test | 0 | 2 | transformers | 2023-08-19T16:16:06 | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1230
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 10.0 | 20 | 0.2306 | 0.0 | 0.0 | 0.0 | 0.9333 |
| No log | 20.0 | 40 | 0.0638 | 0.0 | 0.0 | 0.0 | 0.9778 |
| No log | 30.0 | 60 | 0.0727 | 0.0 | 0.0 | 0.0 | 0.9778 |
| No log | 40.0 | 80 | 0.0940 | 0.0 | 0.0 | 0.0 | 0.9778 |
| No log | 50.0 | 100 | 0.1230 | 0.0 | 0.0 | 0.0 | 0.9778 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,872 | [
[
-0.0266571044921875,
-0.0369873046875,
0.008453369140625,
0.02520751953125,
-0.0159759521484375,
-0.024017333984375,
0.005962371826171875,
-0.007228851318359375,
0.007228851318359375,
0.0201568603515625,
-0.048370361328125,
-0.054412841796875,
-0.038543701171875... |
Triangles/gpt-neo-125m-primordial_earth_full | 2023-09-06T16:02:37.000Z | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | Triangles | null | null | Triangles/gpt-neo-125m-primordial_earth_full | 0 | 2 | transformers | 2023-08-19T19:13:10 | ---
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-primordial_earth_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-primordial_earth_full
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 332 | 3.6724 |
| 3.6848 | 2.0 | 664 | 3.6293 |
| 3.6848 | 3.0 | 996 | 3.6208 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| 1,431 | [
[
-0.03546142578125,
-0.0506591796875,
0.028106689453125,
-0.0013179779052734375,
-0.03515625,
-0.033172607421875,
-0.009246826171875,
-0.01555633544921875,
0.01265716552734375,
0.0240020751953125,
-0.059814453125,
-0.046905517578125,
-0.044403076171875,
-0.00... |
Henil1/mt5-small-hindi-summary-hindi-summary | 2023-08-19T20:23:50.000Z | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Henil1 | null | null | Henil1/mt5-small-hindi-summary-hindi-summary | 0 | 2 | transformers | 2023-08-19T19:59:11 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Henil1/mt5-small-hindi-summary-hindi-summary
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Henil1/mt5-small-hindi-summary-hindi-summary
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 13806, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan | nan | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,565 | [
[
-0.036773681640625,
-0.03875732421875,
0.0215911865234375,
0.008575439453125,
-0.032989501953125,
-0.0208282470703125,
-0.01123809814453125,
-0.0150146484375,
0.00435638427734375,
0.0028133392333984375,
-0.052703857421875,
-0.043243408203125,
-0.05572509765625,
... |
Faradaylab/aria-doc | 2023-08-19T22:04:13.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"biology",
"legal",
"French",
"France",
"fr",
"en",
"dataset:etalab-ia/piaf",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | Faradaylab | null | null | Faradaylab/aria-doc | 0 | 2 | transformers | 2023-08-19T21:18:14 | ---
license: gpl
datasets:
- etalab-ia/piaf
language:
- fr
- en
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- biology
- legal
- French
- France
---
Aria Doc is a model we created for document question answering and data extraction. It was trained on a high-quality dataset of over 3,000 rows of context, question, and answer examples from the PIAF dataset published by Etalab, available at https://huggingface.co/datasets/etalab-ia/piaf. Aria Doc performs strongly on French question answering and on data extraction from French-language documents.
[
-0.031158447265625,
-0.0616455078125,
0.040985107421875,
0.0037174224853515625,
0.0035610198974609375,
-0.0247650146484375,
0.004375457763671875,
-0.0248565673828125,
0.01486968994140625,
0.051055908203125,
-0.044189453125,
-0.054290771484375,
-0.06048583984375,... |
hoangthan/image-classification | 2023-08-21T03:45:31.000Z | [
"keras",
"image-classification",
"en",
"region:us"
] | image-classification | hoangthan | null | null | hoangthan/image-classification | 0 | 2 | keras | 2023-08-20T02:38:32 | ---
library_name: keras
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
---
## Model description
Image classification for both digital images and sketches.
Supported classes: 'butterfly', 'candle', 'cannon', 'elephant', 'hourglass', 'pizza', 'scorpion', 'snail', 'table', 'teapot'.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from huggingface_hub import from_pretrained_keras
>>> model = from_pretrained_keras("hoangthan/image-classification")
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
## Model Plot
| 709 | [
[
-0.037750244140625,
-0.041107177734375,
-0.0007448196411132812,
0.02203369140625,
-0.041778564453125,
-0.01331329345703125,
0.0242767333984375,
-0.0284576416015625,
0.031585693359375,
0.035430908203125,
-0.028411865234375,
-0.04296875,
-0.06719970703125,
-0.... |
tanishq1508/roberta-finetuned-last-company-last-designation | 2023-08-20T08:04:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | tanishq1508 | null | null | tanishq1508/roberta-finetuned-last-company-last-designation | 0 | 2 | transformers | 2023-08-20T07:36:38 | ---
license: cc-by-4.0
base_model: tanishq1508/roberta-finetuned-last-company
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-last-company-last-designation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-last-company-last-designation
This model is a fine-tuned version of [tanishq1508/roberta-finetuned-last-company](https://huggingface.co/tanishq1508/roberta-finetuned-last-company) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| 1,191 | [
[
-0.0236663818359375,
-0.044403076171875,
0.016632080078125,
0.0004665851593017578,
-0.0321044921875,
-0.0258026123046875,
-0.01280975341796875,
-0.0176849365234375,
0.0086822509765625,
0.043212890625,
-0.052734375,
-0.039947509765625,
-0.043853759765625,
0.0... |
TheBloke/LlongOrca-13B-16K-GGML | 2023-09-27T13:01:41.000Z | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"arxiv:2306.02707",
"arxiv:2301.13688",
"arxiv:2307.09288",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | TheBloke | null | null | TheBloke/LlongOrca-13B-16K-GGML | 8 | 2 | transformers | 2023-08-20T20:24:06 | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- Open-Orca/OpenOrca
model_name: LlongOrca 13B 16K
inference: false
model_creator: Open-Orca
model_link: https://huggingface.co/Open-Orca/LlongOrca-13B-16k
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
base_model: Open-Orca/LlongOrca-13B-16k
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LlongOrca 13B 16K - GGML
- Model creator: [Open-Orca](https://huggingface.co/Open-Orca)
- Original model: [LlongOrca 13B 16K](https://huggingface.co/Open-Orca/LlongOrca-13B-16k)
## Description
This repo contains GGML format model files for [Open-Orca's LlongOrca 13B 16K](https://huggingface.co/Open-Orca/LlongOrca-13B-16k).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML)
* [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/LlongOrca-13B-16k)
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
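A small helper that renders a message list into the ChatML format above can look like this (a sketch; chat front-ends that support ChatML apply this template for you):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render [{'role': ..., 'content': ...}] messages as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant")  # cue the model to respond
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How are you?"},
])
print(prompt)
```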
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
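The bits-per-weight figures above translate directly into file size: roughly parameters × bpw / 8 bytes. A sketch (assuming ~13.0B parameters for this model; q4_0 works out to ~4.5 bpw once its per-block fp16 scales are counted):

```python
def ggml_file_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGML file size in decimal GB from parameter count and bpw."""
    return n_params * bits_per_weight / 8 / 1e9

# q4_0: 4-bit weights plus one fp16 scale per 32-weight block -> ~4.5 bpw
size = ggml_file_size_gb(13.016e9, 4.5)
print(round(size, 2))  # close to the 7.32 GB listed for q4_0 in the table below
```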
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llongorca-13b-16k.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llongorca-13b-16k.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llongorca-13b-16k.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llongorca-13b-16k.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llongorca-13b-16k.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| [llongorca-13b-16k.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llongorca-13b-16k.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llongorca-13b-16k.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llongorca-13b-16k.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llongorca-13b-16k.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.14 GB| 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llongorca-13b-16k.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llongorca-13b-16k.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llongorca-13b-16k.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. 6-bit quantization using GGML_TYPE_Q6_K for all tensors |
| [llongorca-13b-16k.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m llongorca-13b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
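The RoPE frequency scale mentioned above is just the ratio of the trained context length to the target context length (linear scaling). A sketch, assuming the Llama 2 base context of 4096 tokens:

```python
def rope_freq_scale(trained_ctx: int, target_ctx: int) -> float:
    """Linear RoPE frequency scale: positions are compressed by trained/target."""
    return trained_ctx / target_ctx

print(rope_freq_scale(4096, 8192))   # doubled context -> 0.5
print(rope_freq_scale(4096, 16384))  # 4x context -> 0.25
```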
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Open-Orca's LlongOrca 13B 16K
<p><h1>🐋 The Second Llong Context Orca! 🐋</h1></p>

# OpenOrca - LlongOrca - 13B - 16k
We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k).
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).
We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This release is trained on a curated filtered subset of most of our GPT-4 augmented data.
It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).
HF Leaderboard evals place this model as #1 for all 13B long context models at release time.
We achieve >112% of the performance of the base LLongMA2-13b-16k model we tuned on top of.
As well, we preserve >98% of the performance of the OpenOrcaxOpenChat-Preview2-13B model we share datasets with, while extending the context to 16k.
We did this training as part of testing the setup of our H100 cluster.
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
Many thanks to @EnricoShippole, @theemozilla, and @kaiokendev1 for the fine work on creating the LlongMA-2-13b-16k model this was trained on top of!
We are in the process of training more models, so keep an eye on our org for releases coming soon with exciting partners.
We will also give sneak-peek announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Prompt Template
We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this.
## Example Prompt Exchange
```
<|im_start|>system
You are LlongOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
How are you now?<|im_end|>
```
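The exchange above can be assembled programmatically. Below is a minimal sketch of a helper that renders a list of role/content messages into this ChatML layout; the function name and the `add_generation_prompt` flag are illustrative choices, not part of the model's tooling.

```python
# Minimal helper to assemble a ChatML prompt string in the format shown above.
# Leaving the final assistant turn open cues the model to complete it.

def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are LlongOrca, a large language model trained by Alignment Lab AI."},
    {"role": "user", "content": "How are you"},
])
print(prompt)
```

The resulting string can be passed directly to a tokenizer, since `<|im_start|>` and `<|im_end|>` were added as tokens for this model.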
# Evaluation
We have evaluated the model using the methodology and tools of the HuggingFace Leaderboard, and find that it significantly improves upon the base long context model.
We reach >112% of LLongMA2-13B-16k performance.
## HuggingFaceH4 Open LLM Leaderboard Performance
We have run our own tests using parameters matching the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) evals.
We preserve >98% of OpenOrcaxOpenChat-Preview2-13B performance and are #1 on the leaderboard for long context 13B models at release time.
We achieve >103% of the performance of the next best 16k model (vicuna-13b-v1.5-16k).
As well, we expect the context extension techniques from LLongMA to be more robust than other 16k context models available.

## GPT4ALL Leaderboard Performance
We find we score higher than all non-OpenOrca models on the GPT4ALL leaderboard, while preserving ~98.7% of our OpenOrcaxOpenChat-Preview2-13B performance.

# Dataset
We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset.
Further details of our curation practices will be forthcoming with our full model releases.
# Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
We trained with 8x H100 GPUs for 10 hours, completing 4 epochs of full fine-tuning on our dataset in one training run.
Commodity cost was ~$300.
# Citation
```bibtex
@software{dale2023llongorca13b,
title = {LlongOrca13B: Llama2-13B Model Instruct-tuned for Long Context on Filtered OpenOrcaV1 GPT-4 Dataset},
author = {Alpin Dale and Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/LlongOrca-13B-16k}},
}
@software{openchat,
title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
doi = {10.5281/zenodo.8105775},
url = {https://github.com/imoneoi/openchat},
version = {pre-release},
year = {2023},
month = {7},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
| 21,818 |
2tle/korean-curse-detection | 2023-08-21T01:41:12.000Z | [
"transformers",
"pytorch",
"electra",
"text-classification",
"ko",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2tle | null | null | 2tle/korean-curse-detection | 0 | 2 | transformers | 2023-08-21T01:29:12 | ---
license: mit
language:
- ko
---
# Korean Curse Detection Electra Model
## Dataset
- [2runo/Curse-detection-data](https://github.com/2runo/Curse-detection-data)
## Model Score
- Accuracy: 87%
## Label
- Label 0: No curse detected
- Label 1: Curse detected | 263 |
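The two labels above can be decoded from pipeline output. The sketch below assumes the classification head uses transformers' default `LABEL_0`/`LABEL_1` names; the mapping dict and `decode_prediction` helper are illustrative, not part of the repository.

```python
# Sketch of decoding the two labels documented above, assuming the model's
# head emits transformers' default "LABEL_0"/"LABEL_1" names.

LABEL_MEANINGS = {
    "LABEL_0": "no curse detected",
    "LABEL_1": "curse detected",
}

def decode_prediction(pred):
    """Map one text-classification output dict to (meaning, score)."""
    return LABEL_MEANINGS[pred["label"]], pred["score"]

# Typical usage with transformers (downloads the model, so not run here):
# from transformers import pipeline
# clf = pipeline("text-classification", model="2tle/korean-curse-detection")
# print(decode_prediction(clf("안녕하세요")[0]))

meaning, score = decode_prediction({"label": "LABEL_1", "score": 0.93})
print(meaning, score)
```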
VietnamAIHub/LLaMA2_Vietnamese_Medical_SFT_13B | 2023-09-09T12:45:34.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | VietnamAIHub | null | null | VietnamAIHub/LLaMA2_Vietnamese_Medical_SFT_13B | 0 | 2 | transformers | 2023-08-21T04:43:49 | ---
license: cc-by-nc-nd-4.0
---
# Note: This model's information will be updated after the Medical LLM group discussion
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| 5,091 |
KingKazma/cnn_dailymail_6789_200000_100000_v1_train | 2023-08-21T15:09:56.000Z | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | KingKazma | null | null | KingKazma/cnn_dailymail_6789_200000_100000_v1_train | 0 | 2 | bertopic | 2023-08-21T15:09:54 |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# cnn_dailymail_6789_200000_100000_v1_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/cnn_dailymail_6789_200000_100000_v1_train")
topic_model.get_topic_info()
```
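The `Label` values in the topic overview below concatenate the topic ID and its top keywords with underscores (e.g. `0_league_goal_cup_player`). A small helper can split a label back into its parts; this format is inferred from the table, so treat the parser as a sketch rather than a BERTopic API.

```python
# Split a BERTopic-style label such as "0_league_goal_cup_player" into its
# topic ID and keyword list. The outlier topic uses ID -1.

def parse_topic_label(label):
    """Return (topic_id, keywords) for a label like '0_league_goal_cup_player'."""
    topic_id, *keywords = label.split("_")
    return int(topic_id), keywords

print(parse_topic_label("0_league_goal_cup_player"))
print(parse_topic_label("-1_said_mr_people_police"))  # -1 is the outlier topic
```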
## Topic overview
* Number of topics: 1082
* Number of training documents: 200000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - mr - people - police - year | 5 | -1_said_mr_people_police |
| 0 | league - goal - cup - player - club | 104194 | 0_league_goal_cup_player |
| 1 | murder - shooting - police - county - shot | 19115 | 1_murder_shooting_police_county |
| 2 | nfl - nba - quarterback - basketball - game | 3824 | 2_nfl_nba_quarterback_basketball |
| 3 | hospital - doctor - cancer - transplant - baby | 1993 | 3_hospital_doctor_cancer_transplant |
| 4 | murray - tennis - wimbledon - federer - djokovic | 1820 | 4_murray_tennis_wimbledon_federer |
| 5 | ship - boat - cruise - rescue - mountain | 1497 | 5_ship_boat_cruise_rescue |
| 6 | plane - flight - airline - passenger - airport | 1419 | 6_plane_flight_airline_passenger |
| 7 | romney - republican - republicans - democrats - obama | 1397 | 7_romney_republican_republicans_democrats |
| 8 | film - movie - comedy - actor - character | 1279 | 8_film_movie_comedy_actor |
| 9 | space - mars - nasa - planet - earth | 1244 | 9_space_mars_nasa_planet |
| 10 | iran - israeli - israel - palestinian - gaza | 1128 | 10_iran_israeli_israel_palestinian |
| 11 | war - soldier - army - afghanistan - medal | 1096 | 11_war_soldier_army_afghanistan |
| 12 | dog - cat - animal - pet - dogs | 998 | 12_dog_cat_animal_pet |
| 13 | mcilroy - golf - woods - ryder - pga | 979 | 13_mcilroy_golf_woods_ryder |
| 14 | ukraine - russia - putin - russian - ukrainian | 939 | 14_ukraine_russia_putin_russian |
| 15 | medal - gold - olympic - games - olympics | 817 | 15_medal_gold_olympic_games |
| 16 | korea - korean - north - kim - koreas | 784 | 16_korea_korean_north_kim |
| 17 | fashion - dress - collection - style - model | 764 | 17_fashion_dress_collection_style |
| 18 | driver - driving - car - road - crash | 740 | 18_driver_driving_car_road |
| 19 | somalia - alshabaab - sudan - kenya - kenyan | 739 | 19_somalia_alshabaab_sudan_kenya |
| 20 | hamilton - prix - rosberg - f1 - formula | 722 | 20_hamilton_prix_rosberg_f1 |
| 21 | shark - whale - dolphin - fish - sea | 705 | 21_shark_whale_dolphin_fish |
| 22 | property - price - house - estate - buyer | 642 | 22_property_price_house_estate |
| 23 | mayweather - fight - pacquiao - boxing - froch | 630 | 23_mayweather_fight_pacquiao_boxing |
| 24 | stabbed - murder - knife - heard - crown | 627 | 24_stabbed_murder_knife_heard |
| 25 | ebola - virus - liberia - leone - sierra | 609 | 25_ebola_virus_liberia_leone |
| 26 | car - vehicle - cars - electric - motor | 608 | 26_car_vehicle_cars_electric |
| 27 | teacher - school - sex - student - sexual | 584 | 27_teacher_school_sex_student |
| 28 | zoo - elephant - animal - rhino - snake | 578 | 28_zoo_elephant_animal_rhino |
| 29 | painting - art - artist - banksy - artwork | 560 | 29_painting_art_artist_banksy |
| 30 | mexican - cartel - mexico - mexicos - drug | 540 | 30_mexican_cartel_mexico_mexicos |
| 31 | prince - duchess - royal - queen - duke | 529 | 31_prince_duchess_royal_queen |
| 32 | nhs - patient - ae - trust - patients | 517 | 32_nhs_patient_ae_trust |
| 33 | pupil - education - school - ofsted - schools | 512 | 33_pupil_education_school_ofsted |
| 34 | snowden - nsa - intelligence - surveillance - snowdens | 492 | 34_snowden_nsa_intelligence_surveillance |
| 35 | chinese - bo - hong - china - kong | 464 | 35_chinese_bo_hong_china |
| 36 | mcdonalds - chocolate - pizza - food - burger | 462 | 36_mcdonalds_chocolate_pizza_food |
| 37 | album - song - music - band - beatles | 454 | 37_album_song_music_band |
| 38 | fire - blaze - firefighter - smoke - flame | 452 | 38_fire_blaze_firefighter_smoke |
| 39 | pope - vatican - francis - cardinal - benedict | 446 | 39_pope_vatican_francis_cardinal |
| 40 | labour - ukip - miliband - farage - tory | 431 | 40_labour_ukip_miliband_farage |
| 41 | iphone - apple - ipad - samsung - tablet | 430 | 41_iphone_apple_ipad_samsung |
| 42 | chavez - venezuela - venezuelan - maduro - farc | 428 | 42_chavez_venezuela_venezuelan_maduro |
| 43 | afghan - afghanistan - taliban - kabul - karzai | 397 | 43_afghan_afghanistan_taliban_kabul |
| 44 | ancient - archaeologist - tomb - roman - bc | 369 | 44_ancient_archaeologist_tomb_roman |
| 45 | africa - african - africas - continent - malawi | 338 | 45_africa_african_africas_continent |
| 46 | weather - rain - temperature - snow - flood | 329 | 46_weather_rain_temperature_snow |
| 47 | horse - jockey - racing - stakes - race | 329 | 47_horse_jockey_racing_stakes |
| 48 | syrian - syria - damascus - alassad - regime | 318 | 48_syrian_syria_damascus_alassad |
| 49 | novel - book - shades - author - fifty | 302 | 49_novel_book_shades_author |
| 50 | ferguson - wilson - brown - browns - louis | 295 | 50_ferguson_wilson_brown_browns |
| 51 | libya - libyan - gadhafi - tripoli - gadhafis | 292 | 51_libya_libyan_gadhafi_tripoli |
| 52 | weight - stone - diet - size - eating | 286 | 52_weight_stone_diet_size |
| 53 | facebook - user - app - facebooks - users | 281 | 53_facebook_user_app_facebooks |
| 54 | mubarak - egypt - egyptian - brotherhood - egypts | 279 | 54_mubarak_egypt_egyptian_brotherhood |
| 55 | sexual - sex - crown - indecent - girl | 273 | 55_sexual_sex_crown_indecent |
| 56 | fraud - money - jailed - crown - account | 266 | 56_fraud_money_jailed_crown |
| 57 | nazi - hitler - auschwitz - holocaust - jews | 265 | 57_nazi_hitler_auschwitz_holocaust |
| 58 | bank - rbs - banking - bonus - barclays | 255 | 58_bank_rbs_banking_bonus |
| 59 | tsarnaev - tamerlan - boston - dzhokhar - bombing | 253 | 59_tsarnaev_tamerlan_boston_dzhokhar |
| 60 | savile - clifford - paedophile - jimmy - abuse | 253 | 60_savile_clifford_paedophile_jimmy |
| 61 | greece - eurozone - greek - euro - bailout | 247 | 61_greece_eurozone_greek_euro |
| 62 | snow - storm - weather - inch - temperature | 244 | 62_snow_storm_weather_inch |
| 63 | delhi - india - rape - singh - indian | 241 | 63_delhi_india_rape_singh |
| 64 | cancer - patient - prostate - treatment - drug | 234 | 64_cancer_patient_prostate_treatment |
| 65 | haiti - portauprince - haitian - earthquake - haitis | 233 | 65_haiti_portauprince_haitian_earthquake |
| 66 | bbc - presenter - radio - programme - archers | 226 | 66_bbc_presenter_radio_programme |
| 67 | hacking - murdoch - coulson - leveson - brooks | 214 | 67_hacking_murdoch_coulson_leveson |
| 68 | cuba - cuban - castro - havana - fidel | 214 | 68_cuba_cuban_castro_havana |
| 69 | zimmerman - trayvon - zimmermans - martin - sanford | 203 | 69_zimmerman_trayvon_zimmermans_martin |
| 70 | samesex - marriage - gay - supreme - ruling | 202 | 70_samesex_marriage_gay_supreme |
| 71 | bp - oil - spill - gulf - deepwater | 201 | 71_bp_oil_spill_gulf |
| 72 | tobacco - smoking - cigarette - ecigarettes - smoker | 198 | 72_tobacco_smoking_cigarette_ecigarettes |
| 73 | mandela - mandelas - nelson - african - south | 198 | 73_mandela_mandelas_nelson_african |
| 74 | climate - emission - carbon - warming - global | 197 | 74_climate_emission_carbon_warming |
| 75 | turkish - turkey - erdogan - turkeys - istanbul | 196 | 75_turkish_turkey_erdogan_turkeys |
| 76 | console - xbox - gaming - playstation - game | 189 | 76_console_xbox_gaming_playstation |
| 77 | pakistani - pakistan - taliban - pakistans - militant | 184 | 77_pakistani_pakistan_taliban_pakistans |
| 78 | crash - driver - suv - car - truck | 177 | 78_crash_driver_suv_car |
| 79 | armstrong - tour - froome - doping - cavendish | 176 | 79_armstrong_tour_froome_doping |
| 80 | jackson - jacksons - aeg - murray - propofol | 175 | 80_jackson_jacksons_aeg_murray |
| 81 | tsa - airport - screening - security - screener | 173 | 81_tsa_airport_screening_security |
| 82 | pistorius - steenkamp - reeva - oscar - pretoria | 172 | 82_pistorius_steenkamp_reeva_oscar |
| 83 | scotland - salmond - scottish - independence - snp | 172 | 83_scotland_salmond_scottish_independence |
| 84 | fire - wildfire - blaze - firefighter - acre | 170 | 84_fire_wildfire_blaze_firefighter |
| 85 | robbery - thief - gang - cctv - jailed | 166 | 85_robbery_thief_gang_cctv |
| 86 | sex - dating - relationship - partner - men | 166 | 86_sex_dating_relationship_partner |
| 87 | gun - nra - newtown - background - firearm | 165 | 87_gun_nra_newtown_background |
| 88 | earthquake - quake - tsunami - magnitude - geological | 165 | 88_earthquake_quake_tsunami_magnitude |
| 89 | boko - haram - nigeria - nigerian - shekau | 163 | 89_boko_haram_nigeria_nigerian |
| 90 | volcano - lava - eruption - ash - volcanic | 162 | 90_volcano_lava_eruption_ash |
| 91 | glass - google - device - wearable - smartwatch | 161 | 91_glass_google_device_wearable |
| 92 | kennedy - kennedys - oswald - assassination - 1963 | 159 | 92_kennedy_kennedys_oswald_assassination |
| 93 | china - zhang - yue - chinas - chinese | 159 | 93_china_zhang_yue_chinas |
| 94 | pirate - ship - piracy - maersk - somalia | 158 | 94_pirate_ship_piracy_maersk |
| 95 | china - japanese - japan - chinese - japans | 153 | 95_china_japanese_japan_chinese |
| 96 | abbott - gillard - turnbull - minister - rudd | 152 | 96_abbott_gillard_turnbull_minister |
| 97 | hollande - sarkozy - trierweiler - french - francois | 148 | 97_hollande_sarkozy_trierweiler_french |
| 98 | sugar - calorie - diet - obesity - fat | 144 | 98_sugar_calorie_diet_obesity |
| 99 | reactor - fukushima - nuclear - plant - radiation | 141 | 99_reactor_fukushima_nuclear_plant |
| 100 | mugabe - zimbabwe - tsvangirai - mugabes - zimbabwes | 137 | 100_mugabe_zimbabwe_tsvangirai_mugabes |
| 101 | tornado - storm - oklahoma - twister - weather | 137 | 101_tornado_storm_oklahoma_twister |
| 102 | ira - belfast - ireland - sinn - fein | 136 | 102_ira_belfast_ireland_sinn |
| 103 | sony - korea - pascal - north - korean | 133 | 103_sony_korea_pascal_north |
| 104 | password - computer - hacker - malware - cyber | 133 | 104_password_computer_hacker_malware |
| 105 | berlusconi - berlusconis - silvio - italian - bunga | 132 | 105_berlusconi_berlusconis_silvio_italian |
| 106 | eu - brussels - cameron - referendum - european | 130 | 106_eu_brussels_cameron_referendum |
| 107 | bali - sukumaran - chan - indonesian - sandiford | 130 | 107_bali_sukumaran_chan_indonesian |
| 108 | fraternity - phi - campus - kappa - hazing | 127 | 108_fraternity_phi_campus_kappa |
| 109 | malaysia - malaysian - mh370 - search - flight | 125 | 109_malaysia_malaysian_mh370_search |
| 110 | sharrouf - australian - islamic - sydney - syria | 125 | 110_sharrouf_australian_islamic_sydney |
| 111 | yemen - yemeni - sanaa - saleh - yemens | 125 | 111_yemen_yemeni_sanaa_saleh |
| 112 | sandusky - penn - paterno - sanduskys - jerry | 123 | 112_sandusky_penn_paterno_sanduskys |
| 113 | knox - sollecito - kercher - knoxs - meredith | 122 | 113_knox_sollecito_kercher_knoxs |
| 114 | baghdad - iraqi - iraq - shiite - sunni | 121 | 114_baghdad_iraqi_iraq_shiite |
| 115 | hernandez - hernandezs - lloyd - odin - patriots | 121 | 115_hernandez_hernandezs_lloyd_odin |
| 116 | spider - bee - insect - ant - butterfly | 121 | 116_spider_bee_insect_ant |
| 117 | transcript - student - todays - roll - news | 120 | 117_transcript_student_todays_roll |
| 118 | bear - polar - cub - bears - wildlife | 118 | 118_bear_polar_cub_bears |
| 119 | rail - hs2 - fare - train - transport | 117 | 119_rail_hs2_fare_train |
| 120 | bangkok - thaksin - thai - yingluck - thailand | 117 | 120_bangkok_thaksin_thai_yingluck |
| 121 | lottery - jackpot - powerball - ticket - winning | 115 | 121_lottery_jackpot_powerball_ticket |
| 122 | island - beach - resort - hotel - spa | 113 | 122_island_beach_resort_hotel |
| 123 | hiv - aids - virus - hivaids - infection | 111 | 123_hiv_aids_virus_hivaids |
| 124 | secret - gonzalez - agent - service - fence | 109 | 124_secret_gonzalez_agent_service |
| 125 | rio - janeiro - brazilian - brazil - paulo | 108 | 125_rio_janeiro_brazilian_brazil |
| 126 | marijuana - pot - legalization - cannabis - colorado | 108 | 126_marijuana_pot_legalization_cannabis |
| 127 | bird - pigeon - birds - owl - nest | 107 | 127_bird_pigeon_birds_owl |
| 128 | hurricane - storm - tropical - mph - bermuda | 106 | 128_hurricane_storm_tropical_mph |
| 129 | hebdo - charlie - merah - coulibaly - kouachi | 105 | 129_hebdo_charlie_merah_coulibaly |
| 130 | lohan - lindsay - lohans - probation - sautner | 104 | 130_lohan_lindsay_lohans_probation |
| 131 | saudi - arabia - abdullah - arabias - riyadh | 103 | 131_saudi_arabia_abdullah_arabias |
| 132 | transcript - curriculum - todays - feedback - click | 103 | 132_transcript_curriculum_todays_feedback |
| 133 | energy - gas - price - ofgem - bill | 100 | 133_energy_gas_price_ofgem |
| 134 | hasan - hood - fort - hasans - nidal | 98 | 134_hasan_hood_fort_hasans |
| 135 | wine - beer - bottle - champagne - alcohol | 97 | 135_wine_beer_bottle_champagne |
| 136 | eu - migrant - immigration - migration - romanians | 96 | 136_eu_migrant_immigration_migration |
| 137 | irs - lerner - taxexempt - issa - koskinen | 96 | 137_irs_lerner_taxexempt_issa |
| 138 | bbc - patten - bbcs - corporation - lord | 95 | 138_bbc_patten_bbcs_corporation |
| 139 | obama - michelle - malia - obamas - lady | 95 | 139_obama_michelle_malia_obamas |
| 140 | madeleine - mccann - praia - luz - portuguese | 94 | 140_madeleine_mccann_praia_luz |
| 141 | evans - ched - sheffield - massey - oldham | 94 | 141_evans_ched_sheffield_massey |
| 142 | salmonella - outbreak - coli - listeria - food | 93 | 142_salmonella_outbreak_coli_listeria |
| 143 | falklands - falkland - islands - argentina - argentine | 91 | 143_falklands_falkland_islands_argentina |
| 144 | typhoon - philippines - tacloban - haiyan - manila | 90 | 144_typhoon_philippines_tacloban_haiyan |
| 145 | calais - migrant - port - bouchart - lorry | 89 | 145_calais_migrant_port_bouchart |
| 146 | olympic - torch - olympics - games - ceremony | 89 | 146_olympic_torch_olympics_games |
| 147 | immigration - immigrant - undocumented - reform - republicans | 89 | 147_immigration_immigrant_undocumented_reform |
| 148 | monis - siege - cafe - lindt - hostage | 89 | 148_monis_siege_cafe_lindt |
| 149 | cosby - cosbys - drugged - comedian - constand | 88 | 149_cosby_cosbys_drugged_comedian |
| 150 | motogp - lorenzo - rossi - pedrosa - marquez | 87 | 150_motogp_lorenzo_rossi_pedrosa |
| 151 | modi - indias - india - delhi - gandhi | 87 | 151_modi_indias_india_delhi |
| 152 | mansion - property - apartment - estate - leibovitz | 86 | 152_mansion_property_apartment_estate |
| 153 | shopper - shopping - retailer - retail - sale | 85 | 153_shopper_shopping_retailer_retail |
| 154 | tesco - aldi - supermarket - waitrose - sainsburys | 85 | 154_tesco_aldi_supermarket_waitrose |
| 155 | nascar - busch - stewart - ward - race | 85 | 155_nascar_busch_stewart_ward |
| 156 | sri - tamil - lankan - lanka - rajapaksa | 84 | 156_sri_tamil_lankan_lanka |
| 157 | drought - water - spill - reservoir - river | 84 | 157_drought_water_spill_reservoir |
| 158 | lanza - newtown - hook - sandy - elementary | 83 | 158_lanza_newtown_hook_sandy |
| 159 | va - veterans - shinseki - veteran - phoenix | 83 | 159_va_veterans_shinseki_veteran |
| 160 | dinosaur - fossil - skeleton - specie - specimen | 82 | 160_dinosaur_fossil_skeleton_specie |
| 161 | adoption - peaches - davion - geldof - adopted | 82 | 161_adoption_peaches_davion_geldof |
| 162 | kanye - kardashian - kim - kris - wedding | 80 | 162_kanye_kardashian_kim_kris |
| 163 | panda - tian - zoo - cub - pandas | 79 | 163_panda_tian_zoo_cub |
| 164 | cannabis - plant - marijuana - drug - factory | 79 | 164_cannabis_plant_marijuana_drug |
| 165 | alcohol - drinking - drink - liver - binge | 78 | 165_alcohol_drinking_drink_liver |
| 166 | sexual - sinclair - assault - military - lackland | 78 | 166_sexual_sinclair_assault_military |
| 167 | occupy - protester - zuccotti - demonstrator - wall | 77 | 167_occupy_protester_zuccotti_demonstrator |
| 168 | name - names - baby - messiah - naming | 77 | 168_name_names_baby_messiah |
| 169 | rihanna - brown - drake - probation - browns | 77 | 169_rihanna_brown_drake_probation |
| 170 | strausskahn - diallo - dominique - strausskahns - imf | 76 | 170_strausskahn_diallo_dominique_strausskahns |
| 171 | letizia - felipe - spanish - king - queen | 76 | 171_letizia_felipe_spanish_king |
| 172 | toyota - gm - recall - ignition - nhtsa | 76 | 172_toyota_gm_recall_ignition |
| 173 | train - railroad - metronorth - derailment - derailed | 76 | 173_train_railroad_metronorth_derailment |
| 174 | bergdahl - bergdahls - bowe - taliban - army | 76 | 174_bergdahl_bergdahls_bowe_taliban |
| 175 | tattoo - tattooed - tattoos - ink - tattooing | 75 | 175_tattoo_tattooed_tattoos_ink |
| 176 | veil - niqab - muslim - burka - ban | 74 | 176_veil_niqab_muslim_burka |
| 177 | flu - h1n1 - swine - vaccine - virus | 74 | 177_flu_h1n1_swine_vaccine |
| 178 | measles - vaccine - mmr - vaccinated - vaccination | 74 | 178_measles_vaccine_mmr_vaccinated |
| 179 | tax - osborne - chancellor - income - labour | 74 | 179_tax_osborne_chancellor_income |
| 180 | anthony - caylee - anthonys - casey - baez | 74 | 180_anthony_caylee_anthonys_casey |
| 181 | gay - sochi - russian - russia - propaganda | 74 | 181_gay_sochi_russian_russia |
| 182 | economy - growth - osborne - recession - chancellor | 73 | 182_economy_growth_osborne_recession |
| 183 | philippines - philippine - ampatuan - aquino - maguindanao | 73 | 183_philippines_philippine_ampatuan_aquino |
| 184 | weiner - spitzer - weiners - abedin - huma | 72 | 184_weiner_spitzer_weiners_abedin |
| 185 | bosnian - mladic - serb - srebrenica - yugoslavia | 71 | 185_bosnian_mladic_serb_srebrenica |
| 186 | mh17 - malaysia - ukraine - airlines - crash | 71 | 186_mh17_malaysia_ukraine_airlines |
| 187 | wealth - richest - gates - billion - billionaire | 70 | 187_wealth_richest_gates_billion |
| 188 | eta - basque - spanish - spain - spains | 70 | 188_eta_basque_spanish_spain |
| 189 | wars - star - abrams - vii - film | 70 | 189_wars_star_abrams_vii |
| 190 | sleep - brain - neuron - stimulation - study | 69 | 190_sleep_brain_neuron_stimulation |
| 191 | lottery - euromillions - jackpot - bayford - ticket | 69 | 191_lottery_euromillions_jackpot_bayford |
| 192 | miner - mine - coal - miners - mining | 68 | 192_miner_mine_coal_miners |
| 193 | ivf - fertility - embryo - womb - egg | 68 | 193_ivf_fertility_embryo_womb |
| 194 | 911 - memorial - museum - trade - towers | 68 | 194_911_memorial_museum_trade |
| 195 | border - unaccompanied - immigrant - patrol - immigration | 68 | 195_border_unaccompanied_immigrant_patrol |
| 196 | ford - toronto - mayor - rob - crack | 67 | 196_ford_toronto_mayor_rob |
| 197 | neanderthals - neanderthal - homo - modern - ancestor | 67 | 197_neanderthals_neanderthal_homo_modern |
| 198 | jeffs - flds - sect - ranch - polygamy | 66 | 198_jeffs_flds_sect_ranch |
| 199 | sandy - storm - superstorm - hurricane - fema | 66 | 199_sandy_storm_superstorm_hurricane |
| 200 | obesity - obese - overweight - bmi - weight | 66 | 200_obesity_obese_overweight_bmi |
| 201 | dewani - anni - shrien - dewanis - tongo | 66 | 201_dewani_anni_shrien_dewanis |
| 202 | robot - robots - robotics - pepper - humanoid | 65 | 202_robot_robots_robotics_pepper |
| 203 | cia - interrogation - torture - cheney - intelligence | 64 | 203_cia_interrogation_torture_cheney |
| 204 | euthanasia - assisted - suicide - terminally - die | 64 | 204_euthanasia_assisted_suicide_terminally |
| 205 | seeker - asylum - detention - refugee - manus | 64 | 205_seeker_asylum_detention_refugee |
| 206 | assange - wikileaks - embassy - ecuadorian - assanges | 64 | 206_assange_wikileaks_embassy_ecuadorian |
| 207 | fracking - shale - gas - drilling - balcombe | 63 | 207_fracking_shale_gas_drilling |
| 208 | ferry - sewol - ship - korean - sinking | 63 | 208_ferry_sewol_ship_korean |
| 209 | tree - christmas - fir - festive - decoration | 63 | 209_tree_christmas_fir_festive |
| 210 | wave - surfer - surfing - surf - swell | 62 | 210_wave_surfer_surfing_surf |
| 211 | deportation - deported - foreign - criminal - deport | 62 | 211_deportation_deported_foreign_criminal |
| 212 | mosquito - dengue - virus - nile - chikungunya | 62 | 212_mosquito_dengue_virus_nile |
| 213 | sloot - der - flores - holloway - van | 62 | 213_sloot_der_flores_holloway |
| 214 | boston - bauman - marathon - leg - celeste | 62 | 214_boston_bauman_marathon_leg |
| 215 | sotomayor - ginsburg - supreme - sotomayors - justice | 62 | 215_sotomayor_ginsburg_supreme_sotomayors |
| 216 | bulger - bulgers - flemmi - whitey - greig | 61 | 216_bulger_bulgers_flemmi_whitey |
| 217 | castro - dejesus - knight - ariel - berry | 61 | 217_castro_dejesus_knight_ariel |
| 218 | parking - warden - council - ticket - fine | 61 | 218_parking_warden_council_ticket |
| 219 | mafia - rancadore - ndrangheta - italian - italy | 61 | 219_mafia_rancadore_ndrangheta_italian |
| 220 | aid - dfid - 07 - spending - development | 60 | 220_aid_dfid_07_spending |
| 221 | tip - paler - buzi - waitress - server | 60 | 221_tip_paler_buzi_waitress |
| 222 | drug - ecstasy - methadone - drugs - mdma | 60 | 222_drug_ecstasy_methadone_drugs |
| 223 | migrant - lampedusa - mediterranean - boat - italian | 60 | 223_migrant_lampedusa_mediterranean_boat |
| 224 | suu - kyi - myanmar - aung - burma | 59 | 224_suu_kyi_myanmar_aung |
| 225 | bhutto - musharraf - pakistan - pakistans - benazir | 59 | 225_bhutto_musharraf_pakistan_pakistans |
| 226 | hindu - diwali - india - festival - delhi | 59 | 226_hindu_diwali_india_festival |
| 227 | pollution - smog - air - beijing - ozone | 59 | 227_pollution_smog_air_beijing |
| 228 | submarine - hms - ship - navy - hunley | 59 | 228_submarine_hms_ship_navy |
| 229 | wedding - bride - betar - groom - couple | 58 | 229_wedding_bride_betar_groom |
| 230 | drone - unmanned - drones - faa - aircraft | 58 | 230_drone_unmanned_drones_faa |
| 231 | childcare - benefit - income - child - tax | 58 | 231_childcare_benefit_income_child |
| 232 | saatchi - nigella - lawson - grillo - lawsons | 58 | 232_saatchi_nigella_lawson_grillo |
| 233 | abortion - clinic - parenthood - texas - antiabortion | 57 | 233_abortion_clinic_parenthood_texas |
| 234 | zara - tindall - phillips - eventing - equestrian | 57 | 234_zara_tindall_phillips_eventing |
| 235 | hockey - nhl - kings - vancouver - bruins | 57 | 235_hockey_nhl_kings_vancouver |
| 236 | balloon - heene - trappe - falcon - mayumi | 56 | 236_balloon_heene_trappe_falcon |
| 237 | ice - antarctic - greenland - glacier - sheet | 56 | 237_ice_antarctic_greenland_glacier |
| 238 | yacht - superyacht - superyachts - vessel - boat | 56 | 238_yacht_superyacht_superyachts_vessel |
| 239 | harris - rolf - bindi - indecent - alwen | 56 | 239_harris_rolf_bindi_indecent |
| 240 | archbishop - church - welby - bishop - marriage | 55 | 240_archbishop_church_welby_bishop |
| 241 | moussaoui - ghaith - zazi - qaeda - al | 55 | 241_moussaoui_ghaith_zazi_qaeda |
| 242 | woman - gap - women - gender - sandberg | 55 | 242_woman_gap_women_gender |
| 243 | isis - syria - jihadi - henning - islamic | 54 | 243_isis_syria_jihadi_henning |
| 244 | etan - patz - aron - graham - hernandez | 54 | 244_etan_patz_aron_graham |
| 245 | adoption - russian - shatto - adopted - adoptive | 54 | 245_adoption_russian_shatto_adopted |
| 246 | sunscreen - skin - tanning - sunbeds - melanoma | 54 | 246_sunscreen_skin_tanning_sunbeds |
| 247 | newsnight - savile - bbc - mcalpine - entwistle | 54 | 247_newsnight_savile_bbc_mcalpine |
| 248 | sherlock - thrones - cumberbatch - benedict - conan | 53 | 248_sherlock_thrones_cumberbatch_benedict |
| 249 | hillsborough - 96 - disaster - liverpool - 1989 | 52 | 249_hillsborough_96_disaster_liverpool |
| 250 | poppy - legion - ceramic - tower - poppies | 52 | 250_poppy_legion_ceramic_tower |
| 251 | mitchell - plebgate - plebs - rennard - downing | 52 | 251_mitchell_plebgate_plebs_rennard |
| 252 | province - lorry - china - li - car | 52 | 252_province_lorry_china_li |
| 253 | foxconn - factory - apple - worker - starnes | 52 | 253_foxconn_factory_apple_worker |
| 254 | porn - pornography - internet - filter - google | 51 | 254_porn_pornography_internet_filter |
| 255 | holmes - theater - aurora - colorado - shooting | 51 | 255_holmes_theater_aurora_colorado |
| 256 | king - luther - civil - selma - kings | 50 | 256_king_luther_civil_selma |
| 257 | edwards - hunter - rielle - cate - quinn | 50 | 257_edwards_hunter_rielle_cate |
| 258 | boles - planning - housing - countryside - development | 50 | 258_boles_planning_housing_countryside |
| 259 | bull - gored - pamplona - bullfighting - bullfight | 50 | 259_bull_gored_pamplona_bullfighting |
| 260 | uber - lyft - taxi - driver - ubers | 50 | 260_uber_lyft_taxi_driver |
| 261 | music - spotify - beats - itunes - streaming | 49 | 261_music_spotify_beats_itunes |
| 262 | scouts - scout - scouting - bsa - gay | 49 | 262_scouts_scout_scouting_bsa |
| 263 | epstein - epsteins - roberts - andrew - prince | 48 | 263_epstein_epsteins_roberts_andrew |
| 264 | missing - nida - disappearance - search - corfe | 47 | 264_missing_nida_disappearance_search |
| 265 | giffords - loughner - tucson - gabrielle - arizona | 47 | 265_giffords_loughner_tucson_gabrielle |
| 266 | alhilli - saad - maillaud - annecy - mollier | 47 | 266_alhilli_saad_maillaud_annecy |
| 267 | isis - syria - iraq - islamic - fighter | 47 | 267_isis_syria_iraq_islamic |
| 268 | dubai - mme - uae - defterios - sheikha | 46 | 268_dubai_mme_uae_defterios |
| 269 | destination - hotel - city - top - ranked | 46 | 269_destination_hotel_city_top |
| 270 | sochi - olympic - olympics - games - ioc | 46 | 270_sochi_olympic_olympics_games |
| 271 | hair - beard - moustache - mustache - facial | 46 | 271_hair_beard_moustache_mustache |
| 272 | driving - drinkdriving - alcohol - breath - limit | 45 | 272_driving_drinkdriving_alcohol_breath |
| 273 | frog - specie - amphibian - frogs - salamander | 45 | 273_frog_specie_amphibian_frogs |
| 274 | blasio - mayor - koch - de - bloomberg | 45 | 274_blasio_mayor_koch_de |
| 275 | adebolajo - rigby - adebowale - woolwich - drummer | 45 | 275_adebolajo_rigby_adebowale_woolwich |
| 276 | abdulmutallab - explosive - yemen - farouk - detonate | 45 | 276_abdulmutallab_explosive_yemen_farouk |
| 277 | petraeus - kelley - broadwell - paula - kelleys | 45 | 277_petraeus_kelley_broadwell_paula |
| 278 | antibiotic - bacteria - infection - mrsa - antibiotics | 45 | 278_antibiotic_bacteria_infection_mrsa |
| 279 | 3d - printer - printing - printed - print | 45 | 279_3d_printer_printing_printed |
| 280 | sham - marriage - immigration - bride - wedding | 45 | 280_sham_marriage_immigration_bride |
| 281 | benghazi - clinton - cia - attack - libya | 44 | 281_benghazi_clinton_cia_attack |
| 282 | canal - treasure - ship - venice - laquila | 44 | 282_canal_treasure_ship_venice |
| 283 | postal - mail - stamp - delivery - royal | 44 | 283_postal_mail_stamp_delivery |
| 284 | allergy - pollen - allergic - peanut - allergies | 44 | 284_allergy_pollen_allergic_peanut |
| 285 | xinhua - earthquake - sichuan - quake - province | 44 | 285_xinhua_earthquake_sichuan_quake |
| 286 | khmer - rouge - cambodia - cambodian - phnom | 44 | 286_khmer_rouge_cambodia_cambodian |
| 287 | thatcher - thatchers - funeral - margaret - baroness | 44 | 287_thatcher_thatchers_funeral_margaret |
| 288 | iii - richard - leicester - king - iiis | 43 | 288_iii_richard_leicester_king |
| 289 | mosque - muslims - muslim - islamic - islam | 43 | 289_mosque_muslims_muslim_islamic |
| 290 | airbus - 447 - bea - france - air | 43 | 290_airbus_447_bea_france |
| 291 | jobs - apple - steve - cook - apples | 43 | 291_jobs_apple_steve_cook |
| 292 | pipeline - keystone - xl - oil - transcanada | 43 | 292_pipeline_keystone_xl_oil |
| 293 | benefit - welfare - duncan - pensions - claimant | 43 | 293_benefit_welfare_duncan_pensions |
| 294 | malala - malalas - taliban - pakistan - education | 43 | 294_malala_malalas_taliban_pakistan |
| 295 | ufo - object - alien - ufos - sighting | 43 | 295_ufo_object_alien_ufos |
| 296 | goto - yukawa - dhaka - bangladesh - japanese | 43 | 296_goto_yukawa_dhaka_bangladesh |
| 297 | black - obama - romney - racial - cain | 42 | 297_black_obama_romney_racial |
| 298 | texting - driving - phone - driver - distracted | 42 | 298_texting_driving_phone_driver |
| 299 | manning - mannings - wikileaks - coombs - lamo | 41 | 299_manning_mannings_wikileaks_coombs |
| 300 | factory - garment - bangladesh - dhaka - rana | 41 | 300_factory_garment_bangladesh_dhaka |
| 301 | bank - robbery - teller - robber - holmes | 41 | 301_bank_robbery_teller_robber |
| 302 | tibetan - tibet - dalai - tibetans - lama | 41 | 302_tibetan_tibet_dalai_tibetans |
| 303 | census - hispanic - population - latino - hispanics | 41 | 303_census_hispanic_population_latino |
| 304 | routh - kyle - littlefield - kyles - rouths | 41 | 304_routh_kyle_littlefield_kyles |
| 305 | rodas - walker - porsche - walkers - gt | 41 | 305_rodas_walker_porsche_walkers |
| 306 | happiness - wellbeing - oecd - coin - index | 41 | 306_happiness_wellbeing_oecd_coin |
| 307 | iraq - isis - troop - iraqi - combat | 41 | 307_iraq_isis_troop_iraqi |
| 308 | tower - skyscraper - building - tallest - burj | 41 | 308_tower_skyscraper_building_tallest |
| 309 | saldanha - greig - jacintha - prank - saldanhas | 41 | 309_saldanha_greig_jacintha_prank |
| 310 | madoff - madoffs - ponzi - bernie - dipascali | 40 | 310_madoff_madoffs_ponzi_bernie |
| 311 | bales - gibbs - morlock - afghan - winfield | 40 | 311_bales_gibbs_morlock_afghan |
| 312 | chemical - syria - syrian - weapon - syrias | 40 | 312_chemical_syria_syrian_weapon |
| 313 | toy - bionic - magnet - buckyballs - exoskeleton | 40 | 313_toy_bionic_magnet_buckyballs |
| 314 | china - xi - chinese - chinas - beijing | 40 | 314_china_xi_chinese_chinas |
| 315 | 4g - att - verizon - network - wireless | 40 | 315_4g_att_verizon_network |
| 316 | garrido - dugard - garridos - jaycee - dugards | 40 | 316_garrido_dugard_garridos_jaycee |
| 317 | christie - wildstein - christies - jersey - governor | 40 | 317_christie_wildstein_christies_jersey |
| 318 | cowell - factor - talent - simon - audition | 40 | 318_cowell_factor_talent_simon |
| 319 | bieber - biebers - justin - singer - miami | 40 | 319_bieber_biebers_justin_singer |
| 320 | pageant - miss - universe - contestant - beauty | 40 | 320_pageant_miss_universe_contestant |
| 321 | cyclone - kashmir - flooding - srinagar - andhra | 40 | 321_cyclone_kashmir_flooding_srinagar |
| 322 | bus - crash - highway - accident - driver | 39 | 322_bus_crash_highway_accident |
| 323 | breastfeeding - breastfeed - feeding - baby - kaleena | 39 | 323_breastfeeding_breastfeed_feeding_baby |
| 324 | repeal - gay - military - lesbian - openly | 39 | 324_repeal_gay_military_lesbian |
| 325 | fgm - mutilation - genital - dharmasena - female | 39 | 325_fgm_mutilation_genital_dharmasena |
| 326 | card - breach - credit - debit - data | 39 | 326_card_breach_credit_debit |
| 327 | ring - engagement - tovin - wedding - diamond | 39 | 327_ring_engagement_tovin_wedding |
| 328 | marathon - boston - runner - race - runners | 38 | 328_marathon_boston_runner_race |
| 329 | russian - airspace - raf - bomber - aircraft | 38 | 329_russian_airspace_raf_bomber |
| 330 | ghost - haunted - ghostly - paranormal - spooky | 38 | 330_ghost_haunted_ghostly_paranormal |
| 331 | tsunami - japan - sendai - earthquake - fukushima | 38 | 331_tsunami_japan_sendai_earthquake |
| 332 | ecclestone - gribkowsky - bernie - ecclestones - formula | 38 | 332_ecclestone_gribkowsky_bernie_ecclestones |
| 333 | turbine - wind - farm - energy - onshore | 38 | 333_turbine_wind_farm_energy |
| 334 | hazing - famu - band - champion - marching | 38 | 334_hazing_famu_band_champion |
| 335 | fertilizer - explosion - plant - ammonium - nitrate | 38 | 335_fertilizer_explosion_plant_ammonium |
| 336 | selfie - selfies - cornellier - photo - dictionaries | 38 | 336_selfie_selfies_cornellier_photo |
| 337 | manson - tate - atkins - parole - mansons | 38 | 337_manson_tate_atkins_parole |
| 338 | mushroom - ash - dieback - tree - fungus | 38 | 338_mushroom_ash_dieback_tree |
| 339 | petrol - litre - price - fuel - diesel | 37 | 339_petrol_litre_price_fuel |
| 340 | mortgage - rate - carney - inflation - bank | 37 | 340_mortgage_rate_carney_inflation |
| 341 | simpson - oj - goldman - simpsons - nicole | 37 | 341_simpson_oj_goldman_simpsons |
| 342 | bali - indonesia - jakarta - indonesian - jemaah | 37 | 342_bali_indonesia_jakarta_indonesian |
| 343 | capaldi - doctor - clara - episode - moffat | 36 | 343_capaldi_doctor_clara_episode |
| 344 | breivik - utoya - oslo - norway - breiviks | 36 | 344_breivik_utoya_oslo_norway |
| 345 | koppenhaver - mack - wwe - wrestling - wrestler | 36 | 345_koppenhaver_mack_wwe_wrestling |
| 346 | gascoigne - gazza - gascoignes - rehab - poole | 36 | 346_gascoigne_gazza_gascoignes_rehab |
| 347 | film - fu - kung - lee - hong | 36 | 347_film_fu_kung_lee |
| 348 | strasbourg - rights - echr - human - grayling | 36 | 348_strasbourg_rights_echr_human |
| 349 | pakistan - flood - flooding - sindh - relief | 36 | 349_pakistan_flood_flooding_sindh |
| 350 | meat - beef - horsemeat - horse - food | 36 | 350_meat_beef_horsemeat_horse |
| 351 | poker - ivey - casino - crockfords - card | 36 | 351_poker_ivey_casino_crockfords |
| 352 | mh17 - ukraine - ukrainian - missile - buk | 36 | 352_mh17_ukraine_ukrainian_missile |
| 353 | sanitation - trafigura - water - toilet - ewaste | 36 | 353_sanitation_trafigura_water_toilet |
| 354 | lightning - queensland - meteorology - storm - cyclone | 36 | 354_lightning_queensland_meteorology_storm |
| 355 | mumbai - kasab - india - taj - indian | 35 | 355_mumbai_kasab_india_taj |
| 356 | santos - samudio - bruno - samudios - souza | 35 | 356_santos_samudio_bruno_samudios |
| 357 | airline - faa - electronic - device - flight | 35 | 357_airline_faa_electronic_device |
| 358 | bahrain - bahrains - bahraini - rajab - saudi | 35 | 358_bahrain_bahrains_bahraini_rajab |
| 359 | arizona - immigration - arizonas - law - brewer | 35 | 359_arizona_immigration_arizonas_law |
| 360 | fox - cat - hog - wolf - animal | 35 | 360_fox_cat_hog_wolf |
| 361 | agencia - brasil - rio - janeiro - teresopolis | 35 | 361_agencia_brasil_rio_janeiro |
| 362 | kasem - kerri - kasems - jean - casey | 35 | 362_kasem_kerri_kasems_jean |
| 363 | osullivan - snooker - trump - ronnie - frame | 35 | 363_osullivan_snooker_trump_ronnie |
| 364 | frein - trooper - bivens - pennsylvania - dickson | 35 | 364_frein_trooper_bivens_pennsylvania |
| 365 | harry - prince - apache - afghanistan - helicopter | 35 | 365_harry_prince_apache_afghanistan |
| 366 | filin - bolshoi - ballet - dmitrichenko - dancer | 35 | 366_filin_bolshoi_ballet_dmitrichenko |
| 367 | ricin - dutschke - curtis - letter - bloomberg | 34 | 367_ricin_dutschke_curtis_letter |
| 368 | chechen - caucasus - dagestan - umarov - chechnya | 34 | 368_chechen_caucasus_dagestan_umarov |
| 369 | canadian - ottawa - vickers - zehafbibeau - parliament | 34 | 369_canadian_ottawa_vickers_zehafbibeau |
| 370 | gibraltar - spanish - spain - gibraltars - picardo | 34 | 370_gibraltar_spanish_spain_gibraltars |
| 371 | marriage - gay - samesex - tory - partnership | 34 | 371_marriage_gay_samesex_tory |
| 372 | china - chinas - economy - growth - chinese | 34 | 372_china_chinas_economy_growth |
| 373 | bikers - lien - mieses - biker - suv | 34 | 373_bikers_lien_mieses_biker |
| 374 | enterovirus - evd68 - d68 - virus - respiratory | 34 | 374_enterovirus_evd68_d68_virus |
| 375 | sikh - sikhs - singh - temple - kaleka | 34 | 375_sikh_sikhs_singh_temple |
| 376 | education - rhee - teacher - teachers - schools | 34 | 376_education_rhee_teacher_teachers |
| 377 | romanian - romanians - romania - roma - arch | 34 | 377_romanian_romanians_romania_roma |
| 378 | cannabis - marijuana - synthetic - k2 - drug | 34 | 378_cannabis_marijuana_synthetic_k2 |
| 379 | witheridge - thai - koh - tao - zaw | 33 | 379_witheridge_thai_koh_tao |
| 380 | chickfila - gay - cathy - therapy - rekers | 33 | 380_chickfila_gay_cathy_therapy |
| 381 | compounding - meningitis - fungal - necc - steroid | 33 | 381_compounding_meningitis_fungal_necc |
| 382 | marathon - runner - badwater - scotlandwilliams - baluchi | 33 | 382_marathon_runner_badwater_scotlandwilliams |
| 383 | rohingya - myanmar - rakhine - buddhists - buddhist | 33 | 383_rohingya_myanmar_rakhine_buddhists |
| 384 | boo - mcdaniel - mama - alana - honey | 32 | 384_boo_mcdaniel_mama_alana |
| 385 | lusty - taubman - lawsuit - rogers - hotton | 32 | 385_lusty_taubman_lawsuit_rogers |
| 386 | tax - hmrc - starbucks - avoidance - hodge | 32 | 386_tax_hmrc_starbucks_avoidance |
| 387 | sheen - sheens - charlie - brooke - mueller | 32 | 387_sheen_sheens_charlie_brooke |
| 388 | chaney - 4chan - nude - hacker - celebrity | 31 | 388_chaney_4chan_nude_hacker |
| 389 | blair - chilcot - inquiry - iraq - sir | 31 | 389_blair_chilcot_inquiry_iraq |
| 390 | diamond - heist - cannes - jewel - jewelry | 31 | 390_diamond_heist_cannes_jewel |
| 391 | whaling - whale - shepherd - japanese - maru | 31 | 391_whaling_whale_shepherd_japanese |
| 392 | miss - pageant - beauty - contestant - universe | 31 | 392_miss_pageant_beauty_contestant |
| 393 | scientology - miscavige - scientologists - church - org | 31 | 393_scientology_miscavige_scientologists_church |
| 394 | botox - skin - filler - lip - cosmetic | 31 | 394_botox_skin_filler_lip |
| 395 | payday - wonga - loan - lender - fca | 31 | 395_payday_wonga_loan_lender |
| 396 | apple - ebooks - kindle - ebook - publisher | 31 | 396_apple_ebooks_kindle_ebook |
| 397 | ukba - immigration - border - asylum - backlog | 31 | 397_ukba_immigration_border_asylum |
| 398 | cyber - stuxnet - computer - hacker - kaspersky | 31 | 398_cyber_stuxnet_computer_hacker |
| 399 | heroes - ireport - journalism - cnn - allstar | 31 | 399_heroes_ireport_journalism_cnn |
| 400 | priest - lynn - archdiocese - philadelphia - monsignor | 30 | 400_priest_lynn_archdiocese_philadelphia |
| 401 | travolta - travoltas - jett - okorocha - kawasaki | 30 | 401_travolta_travoltas_jett_okorocha |
| 402 | chlamydia - stis - gonorrhea - sti - gonorrhoea | 30 | 402_chlamydia_stis_gonorrhea_sti |
| 403 | bake - ruby - berry - tandoh - hollywood | 30 | 403_bake_ruby_berry_tandoh |
| 404 | mers - sars - virus - coronavirus - respiratory | 30 | 404_mers_sars_virus_coronavirus |
| 405 | kashmir - srinagar - indian - india - pakistan | 30 | 405_kashmir_srinagar_indian_india |
| 406 | princess - charlene - prince - madeleine - royal | 30 | 406_princess_charlene_prince_madeleine |
| 407 | bangkok - flood - thailand - flooding - thai | 30 | 407_bangkok_flood_thailand_flooding |
| 408 | dee - benefits - channel - street - turner | 30 | 408_dee_benefits_channel_street |
| 409 | litvinenko - berezovsky - litvinenkos - russian - kgb | 30 | 409_litvinenko_berezovsky_litvinenkos_russian |
| 410 | bobbi - kristina - gordon - whitney - houston | 30 | 410_bobbi_kristina_gordon_whitney |
| 411 | canyon - park - yosemite - national - rim | 30 | 411_canyon_park_yosemite_national |
| 412 | philpott - mairead - mick - mosley - philpotts | 30 | 412_philpott_mairead_mick_mosley |
| 413 | gun - 3d - printer - liberator - bullet | 30 | 413_gun_3d_printer_liberator |
| 414 | lundberg - oil - gas - gallon - price | 30 | 414_lundberg_oil_gas_gallon |
| 415 | eye - lens - vision - cornea - glaucoma | 29 | 415_eye_lens_vision_cornea |
| 416 | divorce - youngs - scot - reno - young | 29 | 416_divorce_youngs_scot_reno |
| 417 | robertson - duck - dynasty - ae - phil | 29 | 417_robertson_duck_dynasty_ae |
| 418 | samesex - marriage - legalize - gay - bill | 29 | 418_samesex_marriage_legalize_gay |
| 419 | fingerprint - password - sensor - unlock - apple | 29 | 419_fingerprint_password_sensor_unlock |
| 420 | cruise - norovirus - ship - passenger - outbreak | 29 | 420_cruise_norovirus_ship_passenger |
| 421 | sao - paulo - brazil - protest - cup | 29 | 421_sao_paulo_brazil_protest |
| 422 | resort - holiday - hotel - ill - hygiene | 29 | 422_resort_holiday_hotel_ill |
| 423 | gm - crop - genetically - modified - farming | 29 | 423_gm_crop_genetically_modified |
| 424 | cocaine - drug - smuggler - coast - guard | 29 | 424_cocaine_drug_smuggler_coast |
| 425 | blagojevich - illinois - burris - governor - senate | 29 | 425_blagojevich_illinois_burris_governor |
| 426 | teeth - dentist - dental - denture - tooth | 29 | 426_teeth_dentist_dental_denture |
| 427 | clarkson - gear - jeremy - bbc - presenter | 29 | 427_clarkson_gear_jeremy_bbc |
| 428 | harbor - pearl - hiroshima - nagasaki - atomic | 29 | 428_harbor_pearl_hiroshima_nagasaki |
| 429 | waste - recycling - ewaste - recycled - tyre | 29 | 429_waste_recycling_ewaste_recycled |
| 430 | teacher - teachers - mcfarland - union - church | 28 | 430_teacher_teachers_mcfarland_union |
| 431 | sinkhole - hole - sinkholes - swallowed - seffner | 28 | 431_sinkhole_hole_sinkholes_swallowed |
| 432 | filner - filners - mayor - diego - harassment | 28 | 432_filner_filners_mayor_diego |
| 433 | cambodia - temple - cambodian - thai - thailand | 28 | 433_cambodia_temple_cambodian_thai |
| 434 | fragrance - perfume - scent - bottle - eau | 28 | 434_fragrance_perfume_scent_bottle |
| 435 | chinese - hacker - hacking - cyber - china | 28 | 435_chinese_hacker_hacking_cyber |
| 436 | dubai - dalelv - mcredmond - blake - acors | 28 | 436_dubai_dalelv_mcredmond_blake |
| 437 | traveller - caravan - travellers - dale - eviction | 28 | 437_traveller_caravan_travellers_dale |
| 438 | khobragade - devyani - housekeeper - indian - immunity | 28 | 438_khobragade_devyani_housekeeper_indian |
| 439 | giordano - gardner - aruba - aruban - robyn | 28 | 439_giordano_gardner_aruba_aruban |
| 440 | fire - bushfires - bushfire - blaze - adelaide | 28 | 440_fire_bushfires_bushfire_blaze |
| 441 | driving - pennant - footballer - speeding - nash | 28 | 441_driving_pennant_footballer_speeding |
| 442 | downton - abbey - lady - grantham - maggie | 28 | 442_downton_abbey_lady_grantham |
| 443 | lodge - sweat - ray - participant - selfhelp | 27 | 443_lodge_sweat_ray_participant |
| 444 | hamza - almasri - hamzas - quin - abu | 27 | 444_hamza_almasri_hamzas_quin |
| 445 | magnotta - montreal - lafreniere - lin - luka | 27 | 445_magnotta_montreal_lafreniere_lin |
| 446 | sniper - kyle - clint - moore - eastwood | 27 | 446_sniper_kyle_clint_moore |
| 447 | askfm - hannah - bullying - troll - cyberbullying | 27 | 447_askfm_hannah_bullying_troll |
| 448 | google - privacy - googles - gmail - user | 27 | 448_google_privacy_googles_gmail |
| 449 | icebreaker - ice - shokalskiy - akademik - ship | 27 | 449_icebreaker_ice_shokalskiy_akademik |
| 450 | michaella - melissa - mccollum - reid - lima | 27 | 450_michaella_melissa_mccollum_reid |
| 451 | gandolfini - sopranos - gandolfinis - soprano - kobold | 26 | 451_gandolfini_sopranos_gandolfinis_soprano |
| 452 | oldest - imich - okawa - kimura - stoehr | 26 | 452_oldest_imich_okawa_kimura |
| 453 | dress - kate - duchess - pearl - worn | 26 | 453_dress_kate_duchess_pearl |
| 454 | reef - coral - reefs - marine - seaview | 26 | 454_reef_coral_reefs_marine |
| 455 | veronica - capobiancos - adoption - dusten - capobianco | 26 | 455_veronica_capobiancos_adoption_dusten |
| 456 | turing - bletchley - enigma - turings - code | 26 | 456_turing_bletchley_enigma_turings |
| 457 | facebook - social - study - happiness - hedonometer | 26 | 457_facebook_social_study_happiness |
| 458 | solo - solos - stevens - jerramy - soccer | 26 | 458_solo_solos_stevens_jerramy |
| 459 | college - tuition - loan - student - education | 26 | 459_college_tuition_loan_student |
| 460 | inmate - prison - sentencing - mandatory - crack | 26 | 460_inmate_prison_sentencing_mandatory |
| 461 | funeral - alicante - carousel - belt - airport | 26 | 461_funeral_alicante_carousel_belt |
| 462 | constable - policing - buckland - pension - commissioner | 26 | 462_constable_policing_buckland_pension |
| 463 | college - enin - admission - student - sat | 26 | 463_college_enin_admission_student |
| 464 | abortion - termination - bpas - pregnancy - doogan | 26 | 464_abortion_termination_bpas_pregnancy |
| 465 | alzheimers - brain - dementia - disease - cognitive | 26 | 465_alzheimers_brain_dementia_disease |
| 466 | extradition - mckinnon - dunham - mckinnons - extradited | 26 | 466_extradition_mckinnon_dunham_mckinnons |
| 467 | crocodile - reptile - crocodiles - croc - saltwater | 26 | 467_crocodile_reptile_crocodiles_croc |
| 468 | circumcision - circumcised - foreskin - herpes - uncircumcised | 25 | 468_circumcision_circumcised_foreskin_herpes |
| 469 | g4s - buckles - olympic - olympics - games | 25 | 469_g4s_buckles_olympic_olympics |
| 470 | charger - iphone - apple - phone - battery | 25 | 470_charger_iphone_apple_phone |
| 471 | missile - nuclear - air - force - minuteman | 25 | 471_missile_nuclear_air_force |
| 472 | food - hunger - rice - hungry - undernourished | 25 | 472_food_hunger_rice_hungry |
| 473 | lowndes - kendrick - johnsons - mat - gym | 25 | 473_lowndes_kendrick_johnsons_mat |
| 474 | ashya - proton - ashyas - prague - therapy | 25 | 474_ashya_proton_ashyas_prague |
| 475 | disney - disneyland - walt - park - theme | 25 | 475_disney_disneyland_walt_park |
| 476 | bleach - kaur - harding - dickeys - drank | 25 | 476_bleach_kaur_harding_dickeys |
| 477 | gosnell - gosnells - clinic - abortion - philadelphia | 25 | 477_gosnell_gosnells_clinic_abortion |
| 478 | charlottesville - matthew - hannah - virginia - harrington | 25 | 478_charlottesville_matthew_hannah_virginia |
| 479 | ugandan - uganda - homosexuality - gay - homosexual | 25 | 479_ugandan_uganda_homosexuality_gay |
| 480 | deen - deens - paula - bubba - paculis | 25 | 480_deen_deens_paula_bubba |
| 481 | penguin - penguins - albatross - chick - rspb | 25 | 481_penguin_penguins_albatross_chick |
| 482 | statue - liberty - ellis - island - phuket | 25 | 482_statue_liberty_ellis_island |
| 483 | polio - vaccination - pakistan - vaccine - antipolio | 25 | 483_polio_vaccination_pakistan_vaccine |
| 484 | carnage - magaluf - crawl - reveller - roki | 25 | 484_carnage_magaluf_crawl_reveller |
| 485 | dad - mom - noa - cry - parenting | 25 | 485_dad_mom_noa_cry |
| 486 | nyad - swim - nyads - mccardel - jellyfish | 25 | 486_nyad_swim_nyads_mccardel |
| 487 | redskins - native - snyder - name - mascot | 25 | 487_redskins_native_snyder_name |
| 488 | mcafee - belize - faull - mcafees - guatemala | 25 | 488_mcafee_belize_faull_mcafees |
| 489 | mousa - inquiry - lawyers - shiner - iraqis | 25 | 489_mousa_inquiry_lawyers_shiner |
| 490 | orleans - katrina - traylor - levee - hurricane | 24 | 490_orleans_katrina_traylor_levee |
| 491 | alligator - gator - reptile - bobcat - alligators | 24 | 491_alligator_gator_reptile_bobcat |
| 492 | eurostar - train - tunnel - rail - confino | 24 | 492_eurostar_train_tunnel_rail |
| 493 | passport - backlog - application - pugh - office | 24 | 493_passport_backlog_application_pugh |
| 494 | stripper - barbash - gristina - lusty - prostitution | 24 | 494_stripper_barbash_gristina_lusty |
| 495 | caffeine - coffee - drink - energy - drinks | 24 | 495_caffeine_coffee_drink_energy |
| 496 | qaeda - al - yemen - alawlaki - embassy | 24 | 496_qaeda_al_yemen_alawlaki |
| 497 | saudi - olympic - arabia - athlete - ioc | 24 | 497_saudi_olympic_arabia_athlete |
| 498 | greenpeace - arctic - russian - sunrise - activist | 24 | 498_greenpeace_arctic_russian_sunrise |
| 499 | pryce - huhne - briscoe - vicky - speeding | 24 | 499_pryce_huhne_briscoe_vicky |
| 500 | adhd - disorder - hyperactivity - ritalin - stimulant | 24 | 500_adhd_disorder_hyperactivity_ritalin |
| 501 | makeup - beauty - lipstick - lip - skin | 24 | 501_makeup_beauty_lipstick_lip |
| 502 | jesus - manuscript - papyrus - gospel - bible | 24 | 502_jesus_manuscript_papyrus_gospel |
| 503 | student - students - university - cambridge - drinking | 24 | 503_student_students_university_cambridge |
| 504 | twitter - tweet - kutcher - twitters - hashtags | 24 | 504_twitter_tweet_kutcher_twitters |
| 505 | triathlon - workout - crossfit - bike - brynn | 23 | 505_triathlon_workout_crossfit_bike |
| 506 | mcdonnell - mcdonnells - maureen - williams - morrissey | 23 | 506_mcdonnell_mcdonnells_maureen_williams |
| 507 | sport - pe - olympic - olympics - school | 23 | 507_sport_pe_olympic_olympics |
| 508 | winkle - fieri - walmart - degraff - stolen | 23 | 508_winkle_fieri_walmart_degraff |
| 509 | morgan - roper - walmart - limo - morgans | 23 | 509_morgan_roper_walmart_limo |
| 510 | bag - plastic - bags - 5p - singleuse | 23 | 510_bag_plastic_bags_5p |
| 511 | howard - pc - bowman - discrimination - tribunal | 23 | 511_howard_pc_bowman_discrimination |
| 512 | tostee - tostees - gable - warriena - wright | 23 | 512_tostee_tostees_gable_warriena |
| 513 | f35 - f22 - fighter - air - jet | 23 | 513_f35_f22_fighter_air |
| 514 | greste - fahmy - jazeera - mohamed - baher | 23 | 514_greste_fahmy_jazeera_mohamed |
| 515 | jutting - kong - juttings - hong - rurik | 23 | 515_jutting_kong_juttings_hong |
| 516 | dna - genome - synthetic - yeast - genetic | 23 | 516_dna_genome_synthetic_yeast |
| 517 | gammy - surrogacy - surrogate - thai - gammys | 23 | 517_gammy_surrogacy_surrogate_thai |
| 518 | collins - grant - 7th - faye - heaven | 23 | 518_collins_grant_7th_faye |
| 519 | pension - annuity - pensions - saver - retirement | 23 | 519_pension_annuity_pensions_saver |
| 520 | russell - housewives - russells - barrino - taylor | 23 | 520_russell_housewives_russells_barrino |
| 521 | council - a4e - councillor - parryjones - allowance | 23 | 521_council_a4e_councillor_parryjones |
| 522 | maternal - childbirth - reproductive - birth - mortality | 23 | 522_maternal_childbirth_reproductive_birth |
| 523 | riga - moscow - latvias - fire - russia | 22 | 523_riga_moscow_latvias_fire |
| 524 | hollande - tax - french - arnault - france | 22 | 524_hollande_tax_french_arnault |
| 525 | mps - ipsa - expense - mp - salary | 22 | 525_mps_ipsa_expense_mp |
| 526 | jackson - jacksons - michael - goodall - balser | 22 | 526_jackson_jacksons_michael_goodall |
| 527 | vodianova - moscow - prokudingorsky - tsar - nv | 22 | 527_vodianova_moscow_prokudingorsky_tsar |
| 528 | horse - isaacson - ruggeasey - bronwen - winterburn | 22 | 528_horse_isaacson_ruggeasey_bronwen |
| 529 | fayed - diana - dianas - dodi - burrell | 22 | 529_fayed_diana_dianas_dodi |
| 530 | betting - gambling - fobts - shop - bookmaker | 22 | 530_betting_gambling_fobts_shop |
| 531 | lawrence - norris - dobson - stephen - acourt | 22 | 531_lawrence_norris_dobson_stephen |
| 532 | kobane - kurdish - kobani - isis - turkey | 22 | 532_kobane_kurdish_kobani_isis |
| 533 | cliff - sir - bbc - yorkshire - raid | 22 | 533_cliff_sir_bbc_yorkshire |
| 534 | atf - furious - holder - fast - osorioarellanes | 22 | 534_atf_furious_holder_fast |
| 535 | hpv - vaccine - cervical - cancer - gardasil | 22 | 535_hpv_vaccine_cervical_cancer |
| 536 | bitcoin - currency - bitcoins - digital - virtual | 21 | 536_bitcoin_currency_bitcoins_digital |
| 537 | westboro - baptist - church - phelps - picket | 21 | 537_westboro_baptist_church_phelps |
| 538 | flu - h7n9 - virus - poultry - bird | 21 | 538_flu_h7n9_virus_poultry |
| 539 | bowl - puppy - super - ad - godaddy | 21 | 539_bowl_puppy_super_ad |
| 540 | pele - peles - kidney - einstein - edinho | 21 | 540_pele_peles_kidney_einstein |
| 541 | eurovision - contest - conchita - song - azerbaijan | 21 | 541_eurovision_contest_conchita_song |
| 542 | bucket - als - challenge - ice - frates | 21 | 542_bucket_als_challenge_ice |
| 543 | guantanamo - detainee - prisoner - gitmo - bay | 21 | 543_guantanamo_detainee_prisoner_gitmo |
| 544 | autism - autistic - ocd - disorder - nac | 21 | 544_autism_autistic_ocd_disorder |
| 545 | gaza - blockade - israeli - israel - ship | 21 | 545_gaza_blockade_israeli_israel |
| 546 | dotcom - megaupload - dotcoms - copyright - piracy | 21 | 546_dotcom_megaupload_dotcoms_copyright |
| 547 | milk - melamine - dairy - arsenic - rice | 21 | 547_milk_melamine_dairy_arsenic |
| 548 | breast - mastectomy - cancer - angelina - gene | 21 | 548_breast_mastectomy_cancer_angelina |
| 549 | netflix - streaming - tv - netflixs - wuaki | 21 | 549_netflix_streaming_tv_netflixs |
| 550 | hobbit - tolkien - rings - trilogy - tolkiens | 21 | 550_hobbit_tolkien_rings_trilogy |
| 551 | heathrow - runway - airport - estuary - boris | 21 | 551_heathrow_runway_airport_estuary |
| 552 | dow - stock - sp - nasdaq - index | 21 | 552_dow_stock_sp_nasdaq |
| 553 | kassig - isis - doureihi - alberici - mansouri | 21 | 553_kassig_isis_doureihi_alberici |
| 554 | ferry - capsized - boat - bangladesh - sank | 21 | 554_ferry_capsized_boat_bangladesh |
| 555 | lunch - school - nutrition - food - healthier | 21 | 555_lunch_school_nutrition_food |
| 556 | mitochondrial - embryo - mitochondrion - dna - egg | 21 | 556_mitochondrial_embryo_mitochondrion_dna |
| 557 | rivers - endoscopy - korovin - yorkville - joan | 21 | 557_rivers_endoscopy_korovin_yorkville |
| 558 | cull - badger - tb - mcintosh - culling | 21 | 558_cull_badger_tb_mcintosh |
| 559 | tribe - indigenous - guarani - indians - totobiegosode | 20 | 559_tribe_indigenous_guarani_indians |
| 560 | policing - officer - crime - constable - pcsos | 20 | 560_policing_officer_crime_constable |
| 561 | student - strickland - barron - barrons - school | 20 | 561_student_strickland_barron_barrons |
| 562 | flag - union - saltire - flags - nepalese | 20 | 562_flag_union_saltire_flags |
| 563 | hagel - secretary - pentagon - carter - defense | 20 | 563_hagel_secretary_pentagon_carter |
| 564 | pole - trek - harry - antarctica - wounded | 20 | 564_pole_trek_harry_antarctica |
| 565 | loshagin - rakossi - gorulenko - kurochkin - abdullaev | 20 | 565_loshagin_rakossi_gorulenko_kurochkin |
| 566 | roma - ruseva - maria - greece - bulgarian | 20 | 566_roma_ruseva_maria_greece |
| 567 | disney - elsa - frozen - menzel - cinderella | 20 | 567_disney_elsa_frozen_menzel |
| 568 | han - subway - train - platform - mickens | 20 | 568_han_subway_train_platform |
| 569 | strictly - dance - dancing - alesha - bussell | 20 | 569_strictly_dance_dancing_alesha |
| 570 | nobel - prize - peace - oslo - award | 20 | 570_nobel_prize_peace_oslo |
| 571 | handwriting - font - typewriter - suvir - cursive | 20 | 571_handwriting_font_typewriter_suvir |
| 572 | rmt - strike - tube - aslef - tfl | 20 | 572_rmt_strike_tube_aslef |
| 573 | vitamin - supplement - calcium - irx3 - multivitamin | 20 | 573_vitamin_supplement_calcium_irx3 |
| 574 | kilpatrick - nagin - beatty - detroit - kilpatricks | 20 | 574_kilpatrick_nagin_beatty_detroit |
| 575 | boulder - flooding - rain - colorado - county | 20 | 575_boulder_flooding_rain_colorado |
| 576 | skin - cream - collagen - venom - wrinkle | 20 | 576_skin_cream_collagen_venom |
| 577 | catalonia - catalan - independence - spains - spain | 20 | 577_catalonia_catalan_independence_spains |
| 578 | rail - highspeed - transportation - project - amtrak | 20 | 578_rail_highspeed_transportation_project |
| 579 | arpaio - arpaios - sheriff - maricopa - latinos | 20 | 579_arpaio_arpaios_sheriff_maricopa |
| 580 | polanski - polanskis - geimer - roman - 13yearold | 19 | 580_polanski_polanskis_geimer_roman |
| 581 | algerian - hostage - algeria - mali - belmokhtar | 19 | 581_algerian_hostage_algeria_mali |
| 582 | prosthetic - hand - ilimb - bionic - limb | 19 | 582_prosthetic_hand_ilimb_bionic |
| 583 | cho - nut - korean - macadamia - hyunah | 19 | 583_cho_nut_korean_macadamia |
| 584 | ascot - racegoers - hat - racegoer - dress | 19 | 584_ascot_racegoers_hat_racegoer |
| 585 | bin - laden - ladens - oneill - osama | 19 | 585_bin_laden_ladens_oneill |
| 586 | wisconsin - walker - bargaining - walkers - collective | 19 | 586_wisconsin_walker_bargaining_walkers |
| 587 | corruption - corrupt - index - transparency - ranked | 19 | 587_corruption_corrupt_index_transparency |
| 588 | bridge - bridges - span - skagit - collapse | 19 | 588_bridge_bridges_span_skagit |
| 589 | thailand - armstrongbland - janson - koh - hart | 19 | 589_thailand_armstrongbland_janson_koh |
| 590 | bieber - justin - selena - biebers - gomez | 19 | 590_bieber_justin_selena_biebers |
| 591 | jahi - jahis - mcmath - winkfield - oakland | 19 | 591_jahi_jahis_mcmath_winkfield |
| 592 | carnival - notting - hill - festival - reveller | 19 | 592_carnival_notting_hill_festival |
| 593 | fat - weight - settles - plussize - hopkins | 19 | 593_fat_weight_settles_plussize |
| 594 | train - rail - railway - engineering - trains | 19 | 594_train_rail_railway_engineering |
| 595 | benefits - benefit - minkin - stephanie - sisarova | 19 | 595_benefits_benefit_minkin_stephanie |
| 596 | rambold - baugh - cherice - hanlon - rambolds | 19 | 596_rambold_baugh_cherice_hanlon |
| 597 | neknominate - alcohol - drinking - vodka - craze | 19 | 597_neknominate_alcohol_drinking_vodka |
| 598 | dyson - vacuum - cleaner - gtech - dysons | 19 | 598_dyson_vacuum_cleaner_gtech |
| 599 | ulbricht - silk - bitcoins - bitcoin - ulbrichts | 19 | 599_ulbricht_silk_bitcoins_bitcoin |
| 600 | lights - milky - aurora - sky - northern | 19 | 600_lights_milky_aurora_sky |
| 601 | budget - pentagon - defense - panetta - sequestration | 18 | 601_budget_pentagon_defense_panetta |
| 602 | ivins - anthrax - lab - smallpox - cdc | 18 | 602_ivins_anthrax_lab_smallpox |
| 603 | marikana - lonmin - mine - miner - platinum | 18 | 603_marikana_lonmin_mine_miner |
| 604 | belcher - perkins - chiefs - martel - arrowhead | 18 | 604_belcher_perkins_chiefs_martel |
| 605 | cholera - mintz - juba - flooding - flood | 18 | 605_cholera_mintz_juba_flooding |
| 606 | navy - hms - commander - portland - cdr | 18 | 606_navy_hms_commander_portland |
| 607 | chestnut - kobayashi - nathans - hot - contest | 18 | 607_chestnut_kobayashi_nathans_hot |
| 608 | nepal - maoists - maoist - nepals - katawal | 18 | 608_nepal_maoists_maoist_nepals |
| 609 | watts - reese - witherspoon - toth - lucas | 18 | 609_watts_reese_witherspoon_toth |
| 610 | manhattan - city - neon - downtown - york | 18 | 610_manhattan_city_neon_downtown |
| 611 | amoeba - kali - naegleria - kalis - hardig | 18 | 611_amoeba_kali_naegleria_kalis |
| 612 | mayan - apocalypse - maya - calendar - mayans | 18 | 612_mayan_apocalypse_maya_calendar |
| 613 | nuclear - treaty - weapon - russia - missile | 18 | 613_nuclear_treaty_weapon_russia |
| 614 | hinckley - hinckleys - brady - reagan - williamsburg | 18 | 614_hinckley_hinckleys_brady_reagan |
| 615 | toy - moshi - playdoh - shopkins - toys | 18 | 615_toy_moshi_playdoh_shopkins |
| 616 | buenos - aires - argentina - aymara - salva | 18 | 616_buenos_aires_argentina_aymara |
| 617 | chinese - wu - china - chinas - gay | 18 | 617_chinese_wu_china_chinas |
| 618 | mack - schaefer - macks - wiesemack - bali | 18 | 618_mack_schaefer_macks_wiesemack |
| 619 | accent - dialect - cromarty - language - syndrome | 18 | 619_accent_dialect_cromarty_language |
| 620 | pupil - barnwell - teaching - camurat - school | 18 | 620_pupil_barnwell_teaching_camurat |
| 621 | wolfe - wren - gosk - skyler - gender | 18 | 621_wolfe_wren_gosk_skyler |
| 622 | plate - database - camera - cctv - license | 18 | 622_plate_database_camera_cctv |
| 623 | carta - magna - 1215 - library - copy | 18 | 623_carta_magna_1215_library |
| 624 | marilyn - monroe - jasgur - photograph - monroes | 18 | 624_marilyn_monroe_jasgur_photograph |
| 625 | apparel - advert - slogan - tshirt - amazon | 18 | 625_apparel_advert_slogan_tshirt |
| 626 | viagra - oestrogen - menopause - erectile - erection | 18 | 626_viagra_oestrogen_menopause_erectile |
| 627 | meriam - wani - sudanese - ibrahim - sudan | 18 | 627_meriam_wani_sudanese_ibrahim |
| 628 | spaceshiptwo - galactic - virgin - alsbury - mojave | 18 | 628_spaceshiptwo_galactic_virgin_alsbury |
| 629 | rubbish - bin - waste - collection - council | 18 | 629_rubbish_bin_waste_collection |
| 630 | pardon - barbour - ozment - pardoned - mississippi | 18 | 630_pardon_barbour_ozment_pardoned |
| 631 | pilot - navy - jet - besal - coast | 18 | 631_pilot_navy_jet_besal |
| 632 | bollywood - slumdog - bachchan - film - cinema | 17 | 632_bollywood_slumdog_bachchan_film |
| 633 | holiday - termtime - school - fine - fined | 17 | 633_holiday_termtime_school_fine |
| 634 | amoudi - paton - alwaleed - saudi - prince | 17 | 634_amoudi_paton_alwaleed_saudi |
| 635 | evolution - creationism - scientific - intelligent - darwins | 17 | 635_evolution_creationism_scientific_intelligent |
| 636 | hepatitis - kwiatkowski - dental - patient - dentist | 17 | 636_hepatitis_kwiatkowski_dental_patient |
| 637 | marrocco - transplant - arm - lautzenheiser - limb | 17 | 637_marrocco_transplant_arm_lautzenheiser |
| 638 | clown - clowns - northampton - beep - coulrophobia | 17 | 638_clown_clowns_northampton_beep |
| 639 | flowers - coop - methodist - cooperative - bank | 17 | 639_flowers_coop_methodist_cooperative |
| 640 | artificial - ai - deepmind - intelligence - machine | 17 | 640_artificial_ai_deepmind_intelligence |
| 641 | venables - bulger - fergus - bootle - thompson | 17 | 641_venables_bulger_fergus_bootle |
| 642 | badenclay - allison - allisons - badenclays - gerard | 17 | 642_badenclay_allison_allisons_badenclays |
| 643 | grayling - prisoner - offender - prison - justice | 17 | 643_grayling_prisoner_offender_prison |
| 644 | fritzl - elisabeth - cellar - amstetten - josef | 17 | 644_fritzl_elisabeth_cellar_amstetten |
| 645 | bacteria - germ - antibacterial - towel - wash | 17 | 645_bacteria_germ_antibacterial_towel |
| 646 | statin - statins - cholesterol - yeast - risk | 17 | 646_statin_statins_cholesterol_yeast |
| 647 | drone - iran - iranian - rq170 - aircraft | 17 | 647_drone_iran_iranian_rq170 |
| 648 | watkins - mjadzelics - lostprophets - pontypridd - ian | 17 | 648_watkins_mjadzelics_lostprophets_pontypridd |
| 649 | gates - crowley - cambridge - stupidly - harvard | 17 | 649_gates_crowley_cambridge_stupidly |
| 650 | roma - valls - camp - evry - france | 16 | 650_roma_valls_camp_evry |
| 651 | barzee - mitchell - smart - mitchells - smarts | 16 | 651_barzee_mitchell_smart_mitchells |
| 652 | hindley - brady - saddleworth - keith - keiths | 16 | 652_hindley_brady_saddleworth_keith |
| 653 | implant - ear - cochlear - hearing - deaf | 16 | 653_implant_ear_cochlear_hearing |
| 654 | text - internet - texting - pew - survey | 16 | 654_text_internet_texting_pew |
| 655 | jagger - lwren - stones - mick - scotts | 16 | 655_jagger_lwren_stones_mick |
| 656 | beatrix - maxima - willemalexander - queen - throne | 16 | 656_beatrix_maxima_willemalexander_queen |
| 657 | maglev - train - highspeed - rail - railway | 16 | 657_maglev_train_highspeed_rail |
| 658 | davos - wef - forum - economic - delegate | 16 | 658_davos_wef_forum_economic |
| 659 | abbott - putin - g20 - apec - summit | 16 | 659_abbott_putin_g20_apec |
| 660 | nelson - nelsons - trafalgar - hms - admiral | 16 | 660_nelson_nelsons_trafalgar_hms |
| 661 | marijuana - drug - uruguay - coca - decriminalization | 16 | 661_marijuana_drug_uruguay_coca |
| 662 | oni - konye - acid - naomi - niqab | 16 | 662_oni_konye_acid_naomi |
| 663 | skirt - uniform - trouser - pupil - school | 16 | 663_skirt_uniform_trouser_pupil |
| 664 | diet - dairy - eating - fat - weight | 16 | 664_diet_dairy_eating_fat |
| 665 | violin - stradivarius - instrument - stradivari - violins | 16 | 665_violin_stradivarius_instrument_stradivari |
| 666 | reef - container - mnz - rena - ship | 16 | 666_reef_container_mnz_rena |
| 667 | gabor - anhalt - zsa - gabors - von | 16 | 667_gabor_anhalt_zsa_gabors |
| 668 | ecigarette - charger - ecigarettes - exploded - charging | 16 | 668_ecigarette_charger_ecigarettes_exploded |
| 669 | ski - chalet - skiing - resort - skier | 16 | 669_ski_chalet_skiing_resort |
| 670 | thai - rohingya - thailand - myanmar - refugee | 16 | 670_thai_rohingya_thailand_myanmar |
| 671 | porn - condom - hiv - leathers - moratorium | 16 | 671_porn_condom_hiv_leathers |
| 672 | omega3 - fish - triglyceride - oily - fatty | 16 | 672_omega3_fish_triglyceride_oily |
| 673 | amish - mullet - mullets - beard - haircutting | 16 | 673_amish_mullet_mullets_beard |
| 674 | houston - houstons - whitney - winans - davis | 16 | 674_houston_houstons_whitney_winans |
| 675 | sierra - sarai - sierras - istanbul - galata | 16 | 675_sierra_sarai_sierras_istanbul |
| 676 | ballet - dance - mickael - dancer - acosta | 16 | 676_ballet_dance_mickael_dancer |
| 677 | jackson - jesse - sandi - jacksons - jr | 16 | 677_jackson_jesse_sandi_jacksons |
| 678 | taser - tasers - stun - officer - laudisio | 16 | 678_taser_tasers_stun_officer |
| 679 | revenge - porn - bollaert - explicit - posting | 16 | 679_revenge_porn_bollaert_explicit |
| 680 | expedition - antarctic - scotts - shackleton - shackletons | 16 | 680_expedition_antarctic_scotts_shackleton |
| 681 | wheatley - hmp - prison - standford - prisoner | 16 | 681_wheatley_hmp_prison_standford |
| 682 | queen - portrait - belmar - warhol - royal | 15 | 682_queen_portrait_belmar_warhol |
| 683 | huangs - huang - qatari - qatar - gloria | 15 | 683_huangs_huang_qatari_qatar |
| 684 | favourite - fan - yougov - personalitywise - hobbies | 15 | 684_favourite_fan_yougov_personalitywise |
| 685 | knight - suge - shakur - rap - compton | 15 | 685_knight_suge_shakur_rap |
| 686 | dozier - kimmerle - marianna - buried - graf | 15 | 686_dozier_kimmerle_marianna_buried |
| 687 | fcc - neutrality - internet - provider - net | 15 | 687_fcc_neutrality_internet_provider |
| 688 | immigration - whitman - immigrant - undocumented - citizenship | 15 | 688_immigration_whitman_immigrant_undocumented |
| 689 | tower - building - skyscraper - subway - pavilion | 15 | 689_tower_building_skyscraper_subway |
| 690 | rebecca - bullying - judd - rebeccas - sedwick | 15 | 690_rebecca_bullying_judd_rebeccas |
| 691 | berezovsky - abramovich - pugachev - chistyakov - oligarch | 15 | 691_berezovsky_abramovich_pugachev_chistyakov |
| 692 | rat - rats - pest - rodent - piedmont | 15 | 692_rat_rats_pest_rodent |
| 693 | cameron - samantha - ivan - camerons - chequers | 15 | 693_cameron_samantha_ivan_camerons |
| 694 | blasio - mayor - snow - de - roker | 15 | 694_blasio_mayor_snow_de |
| 695 | kitty - hello - sanrio - tsuji - cat | 15 | 695_kitty_hello_sanrio_tsuji |
| 696 | megrahi - lockerbie - almegrahi - megrahis - libya | 15 | 696_megrahi_lockerbie_almegrahi_megrahis |
| 697 | lexi - hollinghurst - sandpiper - liam - operator | 15 | 697_lexi_hollinghurst_sandpiper_liam |
| 698 | ons - married - stepfamilies - proportion - cent | 15 | 698_ons_married_stepfamilies_proportion |
| 699 | mckenna - miller - bode - beck - millers | 15 | 699_mckenna_miller_bode_beck |
| 700 | implant - pip - implants - breast - mas | 15 | 700_implant_pip_implants_breast |
| 701 | blasio - nypd - mayor - de - garner | 15 | 701_blasio_nypd_mayor_de |
| 702 | drug - heroin - drugs - ecstasy - khat | 15 | 702_drug_heroin_drugs_ecstasy |
| 703 | williams - robin - depression - doubtfire - parkinsons | 15 | 703_williams_robin_depression_doubtfire |
| 704 | niger - oil - delta - mend - nigerias | 15 | 704_niger_oil_delta_mend |
| 705 | spains - protest - madrid - demonstration - austerity | 15 | 705_spains_protest_madrid_demonstration |
| 706 | dakar - peterhansel - despres - rally - stage | 15 | 706_dakar_peterhansel_despres_rally |
| 707 | missile - satellite - defense - pentagon - hydrazine | 15 | 707_missile_satellite_defense_pentagon |
| 708 | mammoth - woolly - tusk - fossil - mammoths | 15 | 708_mammoth_woolly_tusk_fossil |
| 709 | qatada - qatadas - jordan - abu - deport | 15 | 709_qatada_qatadas_jordan_abu |
| 710 | ciancia - tsa - hernandez - airport - gerardo | 15 | 710_ciancia_tsa_hernandez_airport |
| 711 | spider - bite - bitten - widow - false | 15 | 711_spider_bite_bitten_widow |
| 712 | roaming - bt - mobile - broadband - comcast | 15 | 712_roaming_bt_mobile_broadband |
| 713 | labeouf - shia - labeoufs - actor - clowes | 15 | 713_labeouf_shia_labeoufs_actor |
| 714 | hajj - pilgrim - mecca - pilgrimage - kaaba | 15 | 714_hajj_pilgrim_mecca_pilgrimage |
| 715 | prabowo - widodo - jakarta - indonesia - jokowi | 15 | 715_prabowo_widodo_jakarta_indonesia |
| 716 | duggan - duggans - ipcc - tottenham - mark | 15 | 716_duggan_duggans_ipcc_tottenham |
| 717 | christmas - festive - shopping - gift - spend | 14 | 717_christmas_festive_shopping_gift |
| 718 | nujood - marriage - saudi - nada - yemen | 14 | 718_nujood_marriage_saudi_nada |
| 719 | bag - herms - mulberry - vuitton - oshkosh | 14 | 719_bag_herms_mulberry_vuitton |
| 720 | insect - salmon - bickerton - locust - fish | 14 | 720_insect_salmon_bickerton_locust |
| 721 | isis - jordanian - alkasasbeh - islamic - kasasbeh | 14 | 721_isis_jordanian_alkasasbeh_islamic |
| 722 | mississippi - river - atchafalaya - spillway - vicksburg | 14 | 722_mississippi_river_atchafalaya_spillway |
| 723 | eroshevich - stern - smiths - smith - kapoor | 14 | 723_eroshevich_stern_smiths_smith |
| 724 | munoz - marlise - erick - fetus - munozs | 14 | 724_munoz_marlise_erick_fetus |
| 725 | fata - medicare - fatas - fraud - medicaid | 14 | 725_fata_medicare_fatas_fraud |
| 726 | horman - kyron - terri - kyrons - kaine | 14 | 726_horman_kyron_terri_kyrons |
| 727 | prescription - heroin - drug - opioid - painkiller | 14 | 727_prescription_heroin_drug_opioid |
| 728 | karoshi - strike - france - paris - aulnay | 14 | 728_karoshi_strike_france_paris |
| 729 | weier - geyser - slender - slenderman - leutner | 14 | 729_weier_geyser_slender_slenderman |
| 730 | dementia - alzheimers - diagnosis - cure - pfizer | 14 | 730_dementia_alzheimers_diagnosis_cure |
| 731 | typhoon - taiwan - philippines - fujian - xinhua | 14 | 731_typhoon_taiwan_philippines_fujian |
| 732 | python - cleese - monty - pythons - idle | 14 | 732_python_cleese_monty_pythons |
| 733 | william - raf - helicopter - prince - duke | 14 | 733_william_raf_helicopter_prince |
| 734 | clooney - amal - alamuddin - clooneys - amals | 14 | 734_clooney_amal_alamuddin_clooneys |
| 735 | porn - pornography - explicit - sexting - online | 14 | 735_porn_pornography_explicit_sexting |
| 736 | orchestra - gergiev - conductor - symphony - musician | 14 | 736_orchestra_gergiev_conductor_symphony |
| 737 | bout - farc - indictment - dea - hunter | 14 | 737_bout_farc_indictment_dea |
| 738 | galactic - virgin - spaceport - branson - space | 14 | 738_galactic_virgin_spaceport_branson |
| 739 | ageing - apoe - vitamin - lifespan - alzheimers | 14 | 739_ageing_apoe_vitamin_lifespan |
| 740 | bangkok - moradi - thai - iranian - israeli | 14 | 740_bangkok_moradi_thai_iranian |
| 741 | foreclosure - foreclosed - trump - hud - kollars | 14 | 741_foreclosure_foreclosed_trump_hud |
| 742 | gosselin - jon - kate - mady - tlc | 14 | 742_gosselin_jon_kate_mady |
| 743 | patient - burley - iqbal - gmc - panel | 14 | 743_patient_burley_iqbal_gmc |
| 744 | rousseff - brazils - brazil - neves - brazilian | 14 | 744_rousseff_brazils_brazil_neves |
| 745 | chism - ritzer - danvers - ritzers - chisms | 14 | 745_chism_ritzer_danvers_ritzers |
| 746 | mali - gourdel - french - belmoktar - hostage | 14 | 746_mali_gourdel_french_belmoktar |
| 747 | castresana - montt - guatemala - ixil - rios | 14 | 747_castresana_montt_guatemala_ixil |
| 748 | spelling - bee - lala - scripps - kush | 14 | 748_spelling_bee_lala_scripps |
| 749 | ipo - zuckerberg - facebooks - stock - facebook | 14 | 749_ipo_zuckerberg_facebooks_stock |
| 750 | zanzibar - trup - gee - kirstie - acid | 14 | 750_zanzibar_trup_gee_kirstie |
| 751 | jubilee - tower - queen - diamond - frecklington | 13 | 751_jubilee_tower_queen_diamond |
| 752 | carlisle - lorry - clarke - carlisles - a64 | 13 | 752_carlisle_lorry_clarke_carlisles |
| 753 | chinese - china - confucius - education - student | 13 | 753_chinese_china_confucius_education |
| 754 | garden - snowdrop - sidmouth - plant - tree | 13 | 754_garden_snowdrop_sidmouth_plant |
| 755 | deforestation - forest - rainforest - indonesia - dioxide | 13 | 755_deforestation_forest_rainforest_indonesia |
| 756 | clark - clarks - huguette - heiress - reclusive | 13 | 756_clark_clarks_huguette_heiress |
| 757 | migraine - headache - aura - trigeminal - neuralgia | 13 | 757_migraine_headache_aura_trigeminal |
| 758 | clinton - clintons - clot - hillary - concussion | 13 | 758_clinton_clintons_clot_hillary |
| 759 | dmlaterbundle - twittervia - zann - lanza - ilfracombe | 13 | 759_dmlaterbundle_twittervia_zann_lanza |
| 760 | fashion - african - bortolussi - designer - kinabuti | 13 | 760_fashion_african_bortolussi_designer |
| 761 | cliff - chalet - ridgemont - erosion - landslide | 13 | 761_cliff_chalet_ridgemont_erosion |
| 762 | thanksgiving - aaa - traveler - travel - airline | 13 | 762_thanksgiving_aaa_traveler_travel |
| 763 | mccluskie - mccluskies - gemma - canal - eastenders | 13 | 763_mccluskie_mccluskies_gemma_canal |
| 764 | heaney - poet - thomass - thomas - poetry | 13 | 764_heaney_poet_thomass_thomas |
| 765 | seat - stroller - graco - recall - bumbo | 13 | 765_seat_stroller_graco_recall |
| 766 | pension - retirement - age - pensions - bichard | 13 | 766_pension_retirement_age_pensions |
| 767 | tiller - roeder - tillers - abortion - antiabortion | 13 | 767_tiller_roeder_tillers_abortion |
| 768 | japanese - okinawa - hadnott - japan - guam | 13 | 768_japanese_okinawa_hadnott_japan |
| 769 | arafat - arafats - polonium - palestinian - polonium210 | 13 | 769_arafat_arafats_polonium_palestinian |
| 770 | laden - bin - thirty - cia - zero | 13 | 770_laden_bin_thirty_cia |
| 771 | exorcism - possessed - exorcist - demon - priest | 13 | 771_exorcism_possessed_exorcist_demon |
| 772 | refugee - guterres - syrian - syrians - refugees | 13 | 772_refugee_guterres_syrian_syrians |
| 773 | facebook - snapchat - user - social - princeton | 13 | 773_facebook_snapchat_user_social |
| 774 | itu - internet - treaty - wcit - telecommunication | 13 | 774_itu_internet_treaty_wcit |
| 775 | keyes - koenig - anchorage - koenigs - currier | 13 | 775_keyes_koenig_anchorage_koenigs |
| 776 | mi6 - williams - holdall - gareth - bag | 13 | 776_mi6_williams_holdall_gareth |
| 777 | whiplash - insurance - insurer - motor - premium | 13 | 777_whiplash_insurance_insurer_motor |
| 778 | signhild - snyder - reginella - lynsi - kotak | 13 | 778_signhild_snyder_reginella_lynsi |
| 779 | pemberton - laude - philippine - olongapo - manila | 13 | 779_pemberton_laude_philippine_olongapo |
| 780 | crime - force - constable - inspector - unrecorded | 13 | 780_crime_force_constable_inspector |
| 781 | jews - antisemitic - antisemitism - jewish - holocaust | 13 | 781_jews_antisemitic_antisemitism_jewish |
| 782 | vineberg - hoffman - heroin - hoffmans - seymour | 13 | 782_vineberg_hoffman_heroin_hoffmans |
| 783 | airasia - indonesian - fuselage - surabaya - plane | 13 | 783_airasia_indonesian_fuselage_surabaya |
| 784 | population - billion - stutz - growth - cartogram | 13 | 784_population_billion_stutz_growth |
| 785 | earthquake - quake - magnitude - iran - irna | 13 | 785_earthquake_quake_magnitude_iran |
| 786 | restaurant - chef - michelin - roca - dish | 12 | 786_restaurant_chef_michelin_roca |
| 787 | benghazi - consulate - stevens - libya - ambassador | 12 | 787_benghazi_consulate_stevens_libya |
| 788 | greenslate - food - stamp - snap - stamps | 12 | 788_greenslate_food_stamp_snap |
| 789 | dookhan - dookhans - chemist - lab - massachusetts | 12 | 789_dookhan_dookhans_chemist_lab |
| 790 | samesex - gender - seijas - gay - lulu | 12 | 790_samesex_gender_seijas_gay |
| 791 | contraception - religious - contraceptive - coverage - mandate | 12 | 791_contraception_religious_contraceptive_coverage |
| 792 | zellweger - kabbalah - ellar - horton - bradley | 12 | 792_zellweger_kabbalah_ellar_horton |
| 793 | prom - farves - homecoming - davuluri - kropp | 12 | 793_prom_farves_homecoming_davuluri |
| 794 | gurion - tel - aviv - israel - airline | 12 | 794_gurion_tel_aviv_israel |
| 795 | rotherham - sexual - abuse - asian - grooming | 12 | 795_rotherham_sexual_abuse_asian |
| 796 | hair - haircut - shave - academy - shaved | 12 | 796_hair_haircut_shave_academy |
| 797 | paris - padlock - pont - seine - bridge | 12 | 797_paris_padlock_pont_seine |
| 798 | jolie - hague - summit - angelina - violence | 12 | 798_jolie_hague_summit_angelina |
| 799 | iplayer - bbc - licence - catchup - hd | 12 | 799_iplayer_bbc_licence_catchup |
| 800 | clock - westworth - daylight - sundial - maggiolo | 12 | 800_clock_westworth_daylight_sundial |
| 801 | rivers - joan - karen - apprentice - trump | 12 | 801_rivers_joan_karen_apprentice |
| 802 | strike - union - walkout - ballot - hedley | 12 | 802_strike_union_walkout_ballot |
| 803 | suri - katie - cruise - tom - holmes | 12 | 803_suri_katie_cruise_tom |
| 804 | oculus - virtual - headset - vr - rift | 12 | 804_oculus_virtual_headset_vr |
| 805 | rahman - hamlets - lutfur - cerit - electoral | 12 | 805_rahman_hamlets_lutfur_cerit |
| 806 | apd - tax - haul - duty - passenger | 12 | 806_apd_tax_haul_duty |
| 807 | delhi - commonwealth - games - india - fennell | 12 | 807_delhi_commonwealth_games_india |
| 808 | asiana - ktvu - 214 - ntsb - intern | 12 | 808_asiana_ktvu_214_ntsb |
| 809 | lucan - rivett - lucans - bingham - lord | 12 | 809_lucan_rivett_lucans_bingham |
| 810 | hut - beach - scalpay - island - widmouth | 12 | 810_hut_beach_scalpay_island |
| 811 | hamm - arnall - continental - hamms - oklahoma | 12 | 811_hamm_arnall_continental_hamms |
| 812 | game - gamers - violent - violence - gamergate | 12 | 812_game_gamers_violent_violence |
| 813 | immigrant - immigration - dolon - detention - deportation | 12 | 813_immigrant_immigration_dolon_detention |
| 814 | singapore - singapores - singaporeans - seng - yakuza | 11 | 814_singapore_singapores_singaporeans_seng |
| 815 | mendoza - catronio - ferrante - welker - marisa | 11 | 815_mendoza_catronio_ferrante_welker |
| 816 | x37b - ixv - space - orbit - rocket | 11 | 816_x37b_ixv_space_orbit |
| 817 | cambodia - cambodian - phnom - penh - aple | 11 | 817_cambodia_cambodian_phnom_penh |
| 818 | payment - zapp - wallet - mobile - looppay | 11 | 818_payment_zapp_wallet_mobile |
| 819 | muslims - husain - rupertsfault - ansari - muslim | 11 | 819_muslims_husain_rupertsfault_ansari |
| 820 | ravi - clementi - clementis - rutgers - webcam | 11 | 820_ravi_clementi_clementis_rutgers |
| 821 | prayer - freshwater - baptists - pledge - school | 11 | 821_prayer_freshwater_baptists_pledge |
| 822 | workout - plitt - fitness - tabata - norton | 11 | 822_workout_plitt_fitness_tabata |
| 823 | council - shiel - land - bunting - flowerbed | 11 | 823_council_shiel_land_bunting |
| 824 | christie - mcauliffe - cuccinelli - sarvis - booker | 11 | 824_christie_mcauliffe_cuccinelli_sarvis |
| 825 | comoros - moroni - yemenia - airbus - nadhoim | 11 | 825_comoros_moroni_yemenia_airbus |
| 826 | inapp - purchase - purchases - apple - refund | 11 | 826_inapp_purchase_purchases_apple |
| 827 | sewer - fatberg - blockage - wipe - fatbergs | 11 | 827_sewer_fatberg_blockage_wipe |
| 828 | helmet - hitchbot - ultrabike - plixi - bike | 11 | 828_helmet_hitchbot_ultrabike_plixi |
| 829 | havel - czech - prague - wenceslas - pragues | 11 | 829_havel_czech_prague_wenceslas |
| 830 | shereka - dartford - milby - scene - shot | 11 | 830_shereka_dartford_milby_scene |
| 831 | nobel - prize - steinman - gurdon - beutler | 11 | 831_nobel_prize_steinman_gurdon |
| 832 | teresa - giudice - giudices - joe - housewives | 11 | 832_teresa_giudice_giudices_joe |
| 833 | enfarinats - jarramplas - els - festival - ibi | 11 | 833_enfarinats_jarramplas_els_festival |
| 834 | squirrel - grey - squirrels - albino - red | 11 | 834_squirrel_grey_squirrels_albino |
| 835 | nio - heatwaves - warmest - temperature - el | 11 | 835_nio_heatwaves_warmest_temperature |
| 836 | vacation - hohlbaum - holiday - goodman - worker | 11 | 836_vacation_hohlbaum_holiday_goodman |
| 837 | kody - polygamy - wives - meri - robyn | 11 | 837_kody_polygamy_wives_meri |
| 838 | shoe - heel - shoes - uform - hassell | 11 | 838_shoe_heel_shoes_uform |
| 839 | coin - mint - coins - 1933 - minted | 11 | 839_coin_mint_coins_1933 |
| 840 | hong - kong - kongs - shui - kuek | 11 | 840_hong_kong_kongs_shui |
| 841 | nguyen - meitiv - nguyens - cancer - stilley | 11 | 841_nguyen_meitiv_nguyens_cancer |
| 842 | oil - isis - baiji - iraq - kurdistan | 10 | 842_oil_isis_baiji_iraq |
| 843 | skull - charie - lupak - luptak - graystock | 10 | 843_skull_charie_lupak_luptak |
| 844 | plant - greenhouse - mars - space - grow | 10 | 844_plant_greenhouse_mars_space |
| 845 | tree - branch - kew - toeppe - mcnulty | 10 | 845_tree_branch_kew_toeppe |
| 846 | ikea - ikeas - furniture - kamprad - fsc | 10 | 846_ikea_ikeas_furniture_kamprad |
| 847 | lincoln - lincolns - kunhardt - gettysburg - abraham | 10 | 847_lincoln_lincolns_kunhardt_gettysburg |
| 848 | ramsey - jonbenet - ramseys - patsy - boulder | 10 | 848_ramsey_jonbenet_ramseys_patsy |
| 849 | cjd - vcjd - prion - disease - cow | 10 | 849_cjd_vcjd_prion_disease |
| 850 | chemical - vocs - mattress - bpa - perchlorate | 10 | 850_chemical_vocs_mattress_bpa |
| 851 | basescu - ponta - romanias - nastase - traian | 10 | 851_basescu_ponta_romanias_nastase |
| 852 | cloud - clouds - lenticular - mammatus - nacreous | 10 | 852_cloud_clouds_lenticular_mammatus |
| 853 | rizzi - dog - doxy - flight - attendant | 10 | 853_rizzi_dog_doxy_flight |
| 854 | laser - railgun - weapon - electromagnetic - beam | 10 | 854_laser_railgun_weapon_electromagnetic |
| 855 | rigby - rigbys - fusilier - drummer - woolwich | 10 | 855_rigby_rigbys_fusilier_drummer |
| 856 | bobo - adams - bobos - autry - holly | 10 | 856_bobo_adams_bobos_autry |
| 857 | biofluorescence - socotra - ultraviolet - light - uv | 10 | 857_biofluorescence_socotra_ultraviolet_light |
| 858 | lambie - burqa - senator - jacqui - abdo | 10 | 858_lambie_burqa_senator_jacqui |
| 859 | laden - bin - qaeda - al - ladens | 10 | 859_laden_bin_qaeda_al |
| 860 | dog - pifas - kehnast - diablo - jess | 10 | 860_dog_pifas_kehnast_diablo |
| 861 | cathedral - pauls - tent - protester - camp | 10 | 861_cathedral_pauls_tent_protester |
| 862 | huguely - lacrosse - huguelys - yeardley - loves | 10 | 862_huguely_lacrosse_huguelys_yeardley |
| 863 | pakistan - afridi - shahzad - bin - pakistani | 10 | 863_pakistan_afridi_shahzad_bin |
| 864 | mandate - supreme - subsidy - individual - law | 10 | 864_mandate_supreme_subsidy_individual |
| 865 | hawking - als - mnd - hawkings - disease | 10 | 865_hawking_als_mnd_hawkings |
| 866 | clarkson - plate - gear - fkl - h982 | 10 | 866_clarkson_plate_gear_fkl |
| 867 | hostel - hotel - suite - guest - shangrila | 10 | 867_hostel_hotel_suite_guest |
| 868 | jordan - amman - arab - jordans - jordanian | 10 | 868_jordan_amman_arab_jordans |
| 869 | percival - kaufenberg - barnes - chrzaszcz - tonks | 10 | 869_percival_kaufenberg_barnes_chrzaszcz |
| 870 | teeth - whitening - brush - toothbrush - brushing | 10 | 870_teeth_whitening_brush_toothbrush |
| 871 | ranta - comers - comer - chaskel - deacon | 10 | 871_ranta_comers_comer_chaskel |
| 872 | derailed - train - lacmegantic - derailment - burkhardt | 10 | 872_derailed_train_lacmegantic_derailment |
| 873 | jessa - duggars - duggar - guthrie - savannah | 10 | 873_jessa_duggars_duggar_guthrie |
| 874 | expectancy - centenarian - older - index - agewatch | 10 | 874_expectancy_centenarian_older_index |
| 875 | buffett - berkshire - britt - buffetts - hathaway | 10 | 875_buffett_berkshire_britt_buffetts |
| 876 | smell - scent - odour - deodorant - sulphide | 10 | 876_smell_scent_odour_deodorant |
| 877 | sivia - mcrae - rawlings - knotweed - sison | 10 | 877_sivia_mcrae_rawlings_knotweed |
| 878 | poverty - economic - wage - mcpherson - income | 10 | 878_poverty_economic_wage_mcpherson |
| 879 | note - bank - lews - signature - lew | 10 | 879_note_bank_lews_signature |
| 880 | oktoberfest - beer - polizzi - festival - collodi | 10 | 880_oktoberfest_beer_polizzi_festival |
| 881 | abe - japans - kan - japan - hatoyama | 10 | 881_abe_japans_kan_japan |
| 882 | nahla - halle - aubry - berry - gabriel | 9 | 882_nahla_halle_aubry_berry |
| 883 | syria - congress - chemical - kerry - obama | 9 | 883_syria_congress_chemical_kerry |
| 884 | bush - methodist - barbara - houston - hw | 9 | 884_bush_methodist_barbara_houston |
| 885 | plague - bubonic - madagascar - flea - locust | 9 | 885_plague_bubonic_madagascar_flea |
| 886 | simon - sinitta - bergantz - factor - niall | 9 | 886_simon_sinitta_bergantz_factor |
| 887 | affirmative - admission - diversity - supreme - fisher | 9 | 887_affirmative_admission_diversity_supreme |
| 888 | minot - levee - river - souris - dakota | 9 | 888_minot_levee_river_souris |
| 889 | jihadi - slee - bary - souaan - syria | 9 | 889_jihadi_slee_bary_souaan |
| 890 | lightning - bolt - struck - thunderstorm - strike | 9 | 890_lightning_bolt_struck_thunderstorm |
| 891 | farrow - allen - woody - ronan - dylan | 9 | 891_farrow_allen_woody_ronan |
| 892 | brothel - trafficking - slavery - bello - juju | 9 | 892_brothel_trafficking_slavery_bello |
| 893 | weightlifting - powerlifting - lifting - lift - bronwyn | 9 | 893_weightlifting_powerlifting_lifting_lift |
| 894 | stanford - stanfords - wasendorf - antigua - financier | 9 | 894_stanford_stanfords_wasendorf_antigua |
| 895 | soca - private - list - vaz - greymans | 9 | 895_soca_private_list_vaz |
| 896 | weight - diet - watchers - calorie - dieter | 9 | 896_weight_diet_watchers_calorie |
| 897 | boj - yen - japans - japan - ghosn | 9 | 897_boj_yen_japans_japan |
| 898 | bercow - mills - commons - clerk - parliamentary | 9 | 898_bercow_mills_commons_clerk |
| 899 | cave - limbert - bridge - doong - cavers | 9 | 899_cave_limbert_bridge_doong |
| 900 | monteith - cory - glee - monteiths - lea | 9 | 900_monteith_cory_glee_monteiths |
| 901 | dangi - kulkarni - guinness - mruga - tall | 9 | 901_dangi_kulkarni_guinness_mruga |
| 902 | sony - yen - sonys - hirai - electronics | 9 | 902_sony_yen_sonys_hirai |
| 903 | bat - owl - chaffinch - bird - puffin | 9 | 903_bat_owl_chaffinch_bird |
| 904 | yelland - wyverstone - cregan - firearm - weapon | 9 | 904_yelland_wyverstone_cregan_firearm |
| 905 | radiation - mobile - phone - cancer - tawkon | 9 | 905_radiation_mobile_phone_cancer |
| 906 | windslowe - silicone - aderotimi - injection - glue | 9 | 906_windslowe_silicone_aderotimi_injection |
| 907 | ice - kungur - harbin - trolltunga - cave | 9 | 907_ice_kungur_harbin_trolltunga |
| 908 | driver - driving - speeding - redspeed - motorist | 9 | 908_driver_driving_speeding_redspeed |
| 909 | berlin - nobel - prize - obama - opcw | 9 | 909_berlin_nobel_prize_obama |
| 910 | harrismoore - harrismoores - colton - barefoot - bandit | 9 | 910_harrismoore_harrismoores_colton_barefoot |
| 911 | napoleon - waterloo - bonaparte - napoleons - wellington | 9 | 911_napoleon_waterloo_bonaparte_napoleons |
| 912 | dimon - jpmorgan - bonus - bank - mf | 9 | 912_dimon_jpmorgan_bonus_bank |
| 913 | phubbing - checking - email - lunch - phone | 9 | 913_phubbing_checking_email_lunch |
| 914 | darwin - springthorpe - lehan - tepper - fraser | 9 | 914_darwin_springthorpe_lehan_tepper |
| 915 | cort - nursery - stowe - tudur - methley | 8 | 915_cort_nursery_stowe_tudur |
| 916 | massage - bains - yeoh - pytlarz - abusin | 8 | 916_massage_bains_yeoh_pytlarz |
| 917 | nuisance - landline - calls - text - bt6500 | 8 | 917_nuisance_landline_calls_text |
| 918 | sopa - piracy - pipa - internet - reddit | 8 | 918_sopa_piracy_pipa_internet |
| 919 | peterson - savio - stacy - savios - petersons | 8 | 919_peterson_savio_stacy_savios |
| 920 | aig - bailout - bonus - fannie - lending | 8 | 920_aig_bailout_bonus_fannie |
| 921 | lamma - ferry - hong - kong - vessel | 8 | 921_lamma_ferry_hong_kong |
| 922 | ramsay - hutcheson - chef - randle - tana | 8 | 922_ramsay_hutcheson_chef_randle |
| 923 | meth - methamphetamine - crystal - drug - breaking | 8 | 923_meth_methamphetamine_crystal_drug |
| 924 | perry - perrys - texas - governor - oops | 8 | 924_perry_perrys_texas_governor |
| 925 | lunar - moon - rover - rabbit - moons | 8 | 925_lunar_moon_rover_rabbit |
| 926 | avon - careerbuildercom - volunteering - earn - job | 8 | 926_avon_careerbuildercom_volunteering_earn |
| 927 | vick - vicks - falcons - nfl - dogfighting | 8 | 927_vick_vicks_falcons_nfl |
| 928 | polish - kaczynski - katyn - poland - kaczynskis | 8 | 928_polish_kaczynski_katyn_poland |
| 929 | search - warrantless - cell - fakhoury - phone | 8 | 929_search_warrantless_cell_fakhoury |
| 930 | goldman - sean - brazilian - bruna - custody | 8 | 930_goldman_sean_brazilian_bruna |
| 931 | circus - performer - ringling - barnum - providence | 8 | 931_circus_performer_ringling_barnum |
| 932 | greaves - saviours - organist - maureen - sheffield | 8 | 932_greaves_saviours_organist_maureen |
| 933 | suleman - octuplets - nadya - kamrava - octomom | 8 | 933_suleman_octuplets_nadya_kamrava |
| 934 | pumpkin - grower - hedge - neale - record | 8 | 934_pumpkin_grower_hedge_neale |
| 935 | shafilea - ahmed - shafia - mevish - badiuzzaman | 8 | 935_shafilea_ahmed_shafia_mevish |
| 936 | contostavlos - tulisa - varey - glc - morgan | 8 | 936_contostavlos_tulisa_varey_glc |
| 937 | xinhua - kui - li - chinese - zhou | 8 | 937_xinhua_kui_li_chinese |
| 938 | stress - cft - meditation - anxiety - depression | 8 | 938_stress_cft_meditation_anxiety |
| 939 | maoist - chhattisgarh - singh - raipur - maoists | 8 | 939_maoist_chhattisgarh_singh_raipur |
| 940 | tyrell - william - kendall - spedding - nsw | 8 | 940_tyrell_william_kendall_spedding |
| 941 | valle - valles - gilberto - hise - kidnap | 8 | 941_valle_valles_gilberto_hise |
| 942 | bunker - silo - underground - missile - dring | 8 | 942_bunker_silo_underground_missile |
| 943 | nhs - redundancy - payoffs - rehired - redundant | 8 | 943_nhs_redundancy_payoffs_rehired |
| 944 | blackwater - waxman - iraqi - xe - iraq | 8 | 944_blackwater_waxman_iraqi_xe |
| 945 | detroit - grosse - flint - detroits - pointe | 8 | 945_detroit_grosse_flint_detroits |
| 946 | thames - water - meter - sewage - aman | 8 | 946_thames_water_meter_sewage |
| 947 | archbishop - welby - canterbury - church - christianity | 8 | 947_archbishop_welby_canterbury_church |
| 948 | clews - caviar - vegan - wagyu - sevruga | 8 | 948_clews_caviar_vegan_wagyu |
| 949 | santa - elf - christmas - santas - lily | 7 | 949_santa_elf_christmas_santas |
| 950 | poverty - appalachia - census - appalachian - bureau | 7 | 950_poverty_appalachia_census_appalachian |
| 951 | waddington - dean - bishop - archbishop - church | 7 | 951_waddington_dean_bishop_archbishop |
| 952 | psy - gangnam - psys - snoop - youtube | 7 | 952_psy_gangnam_psys_snoop |
| 953 | methylamphetamine - methamphetamine - kilogram - australian - meth | 7 | 953_methylamphetamine_methamphetamine_kilogram_australian |
| 954 | savernake - beesley - earl - trustee - farndale | 7 | 954_savernake_beesley_earl_trustee |
| 955 | confinement - solitary - bullock - church - melton | 7 | 955_confinement_solitary_bullock_church |
| 956 | tia - hazell - tias - bridger - sharp | 7 | 956_tia_hazell_tias_bridger |
| 957 | compensation - poliuscurran - teacher - aciro - academies | 7 | 957_compensation_poliuscurran_teacher_aciro |
| 958 | theatre - lambros - cinema - purim - dangour | 7 | 958_theatre_lambros_cinema_purim |
| 959 | belfort - belforts - copperfield - wolf - oakmont | 7 | 959_belfort_belforts_copperfield_wolf |
| 960 | goode - francis - girl - boy - yarlington | 7 | 960_goode_francis_girl_boy |
| 961 | stevens - benghazi - libya - libyan - embassy | 7 | 961_stevens_benghazi_libya_libyan |
| 962 | boness - cv - tooth - pluss - job | 7 | 962_boness_cv_tooth_pluss |
| 963 | malaria - parasite - eradication - vaccine - mosquito | 7 | 963_malaria_parasite_eradication_vaccine |
| 964 | space - chinas - aerobatic - china - shenzhou10 | 7 | 964_space_chinas_aerobatic_china |
| 965 | cycle - superhighway - railway - route - london | 7 | 965_cycle_superhighway_railway_route |
| 966 | corset - penny - ruffinelli - hips - goddiva | 7 | 966_corset_penny_ruffinelli_hips |
| 967 | alkhansa - raqqa - islamic - hrw - suha | 7 | 967_alkhansa_raqqa_islamic_hrw |
| 968 | taveras - reyes - alvarado - leopoldo - quintanilla | 7 | 968_taveras_reyes_alvarado_leopoldo |
| 969 | bulb - leds - paddle - edisons - bulbs | 7 | 969_bulb_leds_paddle_edisons |
| 970 | goodman - goodmans - wilsons - hutchins - polo | 7 | 970_goodman_goodmans_wilsons_hutchins |
| 971 | worboys - ruse - ryn - kelcher - nbv | 7 | 971_worboys_ruse_ryn_kelcher |
| 972 | humanpowered - aircraft - rotor - skyprowler - efan | 7 | 972_humanpowered_aircraft_rotor_skyprowler |
| 973 | elvis - presley - graceland - presleys - elviss | 7 | 973_elvis_presley_graceland_presleys |
| 974 | council - tax - councils - pickles - allowance | 7 | 974_council_tax_councils_pickles |
| 975 | ferrante - cyanide - klein - creatine - spears | 7 | 975_ferrante_cyanide_klein_creatine |
| 976 | translation - hawaiian - language - donaghy - translate | 7 | 976_translation_hawaiian_language_donaghy |
| 977 | ear - sherrie - cartilage - surgery - charlotte | 7 | 977_ear_sherrie_cartilage_surgery |
| 978 | rizzo - salary - bell - spaccia - city | 7 | 978_rizzo_salary_bell_spaccia |
| 979 | berlin - aretz - freeman - 1989 - german | 7 | 979_berlin_aretz_freeman_1989 |
| 980 | pothole - council - road - potholes - lane | 7 | 980_pothole_council_road_potholes |
| 981 | podesta - bush - rating - percent - poll | 7 | 981_podesta_bush_rating_percent |
| 982 | sata - mutharika - banda - malawi - zambian | 7 | 982_sata_mutharika_banda_malawi |
| 983 | grimm - grimms - scotto - durand - congressman | 7 | 983_grimm_grimms_scotto_durand |
| 984 | barge - sancoff - google - susitna - borough | 7 | 984_barge_sancoff_google_susitna |
| 985 | draper - dobson - gentles - car - webb | 7 | 985_draper_dobson_gentles_car |
| 986 | fawcett - oneal - farrah - fawcetts - warhol | 7 | 986_fawcett_oneal_farrah_fawcetts |
| 987 | diamond - beers - diamonds - cullinan - mine | 7 | 987_diamond_beers_diamonds_cullinan |
| 988 | koralewski - pilkington - blackwell - care - siobhan | 7 | 988_koralewski_pilkington_blackwell_care |
| 989 | hundley - bennett - jonah - rickey - shein | 7 | 989_hundley_bennett_jonah_rickey |
| 990 | demi - ashton - demis - grammer - bure | 7 | 990_demi_ashton_demis_grammer |
| 991 | christmas - wanner - johann - mulled - market | 7 | 991_christmas_wanner_johann_mulled |
| 992 | connelly - barker - tracey - haringey - owen | 6 | 992_connelly_barker_tracey_haringey |
| 993 | righttowork - union - unionism - embryonic - lamberth | 6 | 993_righttowork_union_unionism_embryonic |
| 994 | mcchrystal - fallon - petraeus - mcchrystals - gates | 6 | 994_mcchrystal_fallon_petraeus_mcchrystals |
| 995 | nellore - railway - delhi - carriage - india | 6 | 995_nellore_railway_delhi_carriage |
| 996 | ramos - blasio - bratton - liu - wenjian | 6 | 996_ramos_blasio_bratton_liu |
| 997 | biofuel - biofuels - ethanol - biochar - gasoline | 6 | 997_biofuel_biofuels_ethanol_biochar |
| 998 | graphene - hydrogen - atom - nanodiamonds - membrane | 6 | 998_graphene_hydrogen_atom_nanodiamonds |
| 999 | rico - gang - puerto - homicide - crime | 6 | 999_rico_gang_puerto_homicide |
| 1000 | nigeria - jonathan - nigerias - election - nigerians | 6 | 1000_nigeria_jonathan_nigerias_election |
| 1001 | oxygen - membrane - rock - noffke - bacteria | 6 | 1001_oxygen_membrane_rock_noffke |
| 1002 | mouse - gavage - soulard - crueltyfree - foie | 6 | 1002_mouse_gavage_soulard_crueltyfree |
| 1003 | idol - lopez - kinane - finale - franco | 6 | 1003_idol_lopez_kinane_finale |
| 1004 | iraqi - almaliki - iraq - iraqs - alabadi | 6 | 1004_iraqi_almaliki_iraq_iraqs |
| 1005 | colwell - flag - australian - indigenous - racist | 6 | 1005_colwell_flag_australian_indigenous |
| 1006 | feonyx - kaydon - sullock - nappy - cot | 6 | 1006_feonyx_kaydon_sullock_nappy |
| 1007 | belhadj - gaddafi - straw - blair - rendition | 6 | 1007_belhadj_gaddafi_straw_blair |
| 1008 | leave - parental - childcare - schroeders - scatty | 6 | 1008_leave_parental_childcare_schroeders |
| 1009 | wormhole - photon - teleportation - relativity - warp | 6 | 1009_wormhole_photon_teleportation_relativity |
| 1010 | nelson - knight - pearson - fired - knights | 6 | 1010_nelson_knight_pearson_fired |
| 1011 | toilet - jaeduck - symonds - toiletshaped - warriors | 6 | 1011_toilet_jaeduck_symonds_toiletshaped |
| 1012 | calorie - sauce - calories - chopped - protein | 6 | 1012_calorie_sauce_calories_chopped |
| 1013 | arin - jorgensen - christine - gender - tiffany | 6 | 1013_arin_jorgensen_christine_gender |
| 1014 | lewinsky - clinton - clintons - monica - lewinskys | 6 | 1014_lewinsky_clinton_clintons_monica |
| 1015 | bundy - cliven - bundys - cattle - rancher | 6 | 1015_bundy_cliven_bundys_cattle |
| 1016 | trentadue - bombing - kaczynski - mcveigh - oklahoma | 6 | 1016_trentadue_bombing_kaczynski_mcveigh |
| 1017 | bell - malden - clarence - hamzahs - sennett | 6 | 1017_bell_malden_clarence_hamzahs |
| 1018 | valentines - cuddle - hess - romantic - cuddling | 6 | 1018_valentines_cuddle_hess_romantic |
| 1019 | hockey - eruzione - suter - selanne - finland | 6 | 1019_hockey_eruzione_suter_selanne |
| 1020 | maps - apple - apples - forstall - google | 6 | 1020_maps_apple_apples_forstall |
| 1021 | bonfire - lewes - effigy - fawkes - sussex | 6 | 1021_bonfire_lewes_effigy_fawkes |
| 1022 | cunningham - josie - boob - 4800 - wannabe | 6 | 1022_cunningham_josie_boob_4800 |
| 1023 | khloe - timeless - syms - kardashian - thickes | 6 | 1023_khloe_timeless_syms_kardashian |
| 1024 | malika - hinksman - daisyray - bath - geoff | 6 | 1024_malika_hinksman_daisyray_bath |
| 1025 | saffron - ruhleben - horticultural - garden - flower | 5 | 1025_saffron_ruhleben_horticultural_garden |
| 1026 | grant - tina - hong - hugh - landon | 5 | 1026_grant_tina_hong_hugh |
| 1027 | deeds - gus - creigh - virginia - millboro | 5 | 1027_deeds_gus_creigh_virginia |
| 1028 | miliband - naftali - syria - labour - bso | 5 | 1028_miliband_naftali_syria_labour |
| 1029 | palin - johnston - bristol - palins - king | 5 | 1029_palin_johnston_bristol_palins |
| 1030 | teen - pregnancy - birth - kearney - unplanned | 5 | 1030_teen_pregnancy_birth_kearney |
| 1031 | ocd - scrupulosity - abramowitz - hoarding - paperchase | 5 | 1031_ocd_scrupulosity_abramowitz_hoarding |
| 1032 | drinkdrive - grandparent - fatality - safer - rural | 5 | 1032_drinkdrive_grandparent_fatality_safer |
| 1033 | pirabahuran - shopkeeper - hennessy - robber - hennesy | 5 | 1033_pirabahuran_shopkeeper_hennessy_robber |
| 1034 | hair - shaunni - parsons - kiera - mackenzie | 5 | 1034_hair_shaunni_parsons_kiera |
| 1035 | plastic - ocean - plastiki - gyre - trash | 5 | 1035_plastic_ocean_plastiki_gyre |
| 1036 | nicholson - nielsen - connie - award - outstanding | 5 | 1036_nicholson_nielsen_connie_award |
| 1037 | google - search - ruling - engine - results | 5 | 1037_google_search_ruling_engine |
| 1038 | iraqi - troop - iraq - iraqs - almaliki | 5 | 1038_iraqi_troop_iraq_iraqs |
| 1039 | pozonsky - oswald - meth - fresno - walmart | 5 | 1039_pozonsky_oswald_meth_fresno |
| 1040 | watts - crib - kumpula - baby - kimpton | 5 | 1040_watts_crib_kumpula_baby |
| 1041 | amnesty - shetty - human - aceves - gualinga | 5 | 1041_amnesty_shetty_human_aceves |
| 1042 | naji - guantanamo - alshibh - aamer - aamers | 5 | 1042_naji_guantanamo_alshibh_aamer |
| 1043 | sanford - sanfords - jenny - carolina - governors | 5 | 1043_sanford_sanfords_jenny_carolina |
| 1044 | vallance - lowe - jesperson - chang - wiggins | 5 | 1044_vallance_lowe_jesperson_chang |
| 1045 | collapse - building - construction - collapsed - canacona | 5 | 1045_collapse_building_construction_collapsed |
| 1046 | gift - rachel - christmas - present - gates | 5 | 1046_gift_rachel_christmas_present |
| 1047 | tudor - henry - asprey - tudors - wolf | 5 | 1047_tudor_henry_asprey_tudors |
| 1048 | breastfeeding - formula - milk - breastfeed - breastfed | 5 | 1048_breastfeeding_formula_milk_breastfeed |
| 1049 | costume - headdress - macklemore - stereotype - halloween | 5 | 1049_costume_headdress_macklemore_stereotype |
| 1050 | iraq - isil - syria - strike - air | 5 | 1050_iraq_isil_syria_strike |
| 1051 | contraception - pill - sterilization - armor - gauchat | 5 | 1051_contraception_pill_sterilization_armor |
| 1052 | chao - gigi - eav - sean - cecil | 5 | 1052_chao_gigi_eav_sean |
| 1053 | exercise - activity - walking - brisk - heart | 5 | 1053_exercise_activity_walking_brisk |
| 1054 | hygiene - tesco - earwig - store - supermarket | 5 | 1054_hygiene_tesco_earwig_store |
| 1055 | shakespeare - shakespeares - rylance - folio - shylock | 5 | 1055_shakespeare_shakespeares_rylance_folio |
| 1056 | hightower - nace - demaio - zilge - dingess | 5 | 1056_hightower_nace_demaio_zilge |
| 1057 | dubai - mulla - calcutt - robert - uae | 5 | 1057_dubai_mulla_calcutt_robert |
| 1058 | wikileaks - assange - classified - gates - manning | 5 | 1058_wikileaks_assange_classified_gates |
| 1059 | dubai - burj - 971 - dubais - al | 5 | 1059_dubai_burj_971_dubais |
| 1060 | schall - cdu - germany - npd - german | 5 | 1060_schall_cdu_germany_npd |
| 1061 | allotment - garden - brana - plot - cummins | 5 | 1061_allotment_garden_brana_plot |
| 1062 | lopez - hood - fort - owens - muntean | 5 | 1062_lopez_hood_fort_owens |
| 1063 | hansen - conklin - hansens - kk - waikiki | 5 | 1063_hansen_conklin_hansens_kk |
| 1064 | splashlight - beauty - profile - poppy - colour | 5 | 1064_splashlight_beauty_profile_poppy |
| 1065 | ua - slough - thermal - outbuilding - council | 5 | 1065_ua_slough_thermal_outbuilding |
| 1066 | hribal - murrain - hribals - kalmbach - thomassey | 5 | 1066_hribal_murrain_hribals_kalmbach |
| 1067 | ktf - kermit - muppets - sesame - muppet | 5 | 1067_ktf_kermit_muppets_sesame |
| 1068 | beamond - haslemere - halliwell - goddenedwards - beamonds | 5 | 1068_beamond_haslemere_halliwell_goddenedwards |
| 1069 | tinder - dating - rad - ignighter - antidate | 5 | 1069_tinder_dating_rad_ignighter |
| 1070 | robot - robots - autonomous - 1939 - artificial | 5 | 1070_robot_robots_autonomous_1939 |
| 1071 | markoff - brisman - conley - markoffs - julissa | 5 | 1071_markoff_brisman_conley_markoffs |
| 1072 | port - almasry - cairo - alahly - egypts | 5 | 1072_port_almasry_cairo_alahly |
| 1073 | casquejo - tower - skyscraper - basher - wtc | 5 | 1073_casquejo_tower_skyscraper_basher |
| 1074 | lodger - grownup - average - class - rent | 5 | 1074_lodger_grownup_average_class |
| 1075 | palace - royal - buckingham - queen - savoir | 5 | 1075_palace_royal_buckingham_queen |
| 1076 | tokyo - shibuya - marunouchi - akihabara - nakagin | 5 | 1076_tokyo_shibuya_marunouchi_akihabara |
| 1077 | food - trussell - bank - voucher - rayner | 5 | 1077_food_trussell_bank_voucher |
| 1078 | berg - bush - jenna - wilson - library | 5 | 1078_berg_bush_jenna_wilson |
| 1079 | hmrc - owe - accountants - chartered - tax | 5 | 1079_hmrc_owe_accountants_chartered |
| 1080 | disability - benefit - dla - claimant - allowance | 5 | 1080_disability_benefit_dla_claimant |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
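The hyperparameters listed above map one-to-one onto keyword arguments of the `BERTopic` constructor. A minimal sketch (the commented-out call assumes the standard `bertopic` package API; it is not taken from this card):

```python
# The hyperparameters listed above, expressed as BERTopic constructor
# keyword arguments.
topic_model_kwargs = {
    "calculate_probabilities": False,
    "language": "english",
    "low_memory": False,
    "min_topic_size": 10,
    "n_gram_range": (1, 1),
    "nr_topics": None,
    "seed_topic_list": None,
    "top_n_words": 10,
    "verbose": False,
}

# from bertopic import BERTopic              # assumes `bertopic` is installed
# topic_model = BERTopic(**topic_model_kwargs)
# topics, _ = topic_model.fit_transform(docs)
```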
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.10.12
| 106,286 | [
[
-0.06451416015625,
-0.0213165283203125,
0.006626129150390625,
0.0013437271118164062,
-0.0185699462890625,
0.01474761962890625,
0.0024967193603515625,
-0.0175628662109375,
0.055084228515625,
0.020904541015625,
-0.04345703125,
-0.05291748046875,
-0.054931640625,
... |
adrinanou/distilbert-base-uncased-finetuned-squac | 2023-08-24T13:56:43.000Z | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:IIC/SQUAC",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | adrinanou | null | null | adrinanou/distilbert-base-uncased-finetuned-squac | 0 | 2 | transformers | 2023-08-21T15:58:07 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squac
results: []
datasets:
- IIC/SQUAC
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squac
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an extractive question-answering dataset (SQuAC).
It achieves the following results on the evaluation set:
- Loss: 2.0973
## Model description
More information needed
## Intended uses & limitations
More information needed
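No usage example is given in this card; the following is a hedged sketch of extractive-QA inference with this checkpoint, assuming the standard `transformers` pipeline API. The sample question and context are illustrative, not drawn from SQuAC.

```python
def best_answer(qa_pipeline, question, context):
    """Run an extractive-QA pipeline and return (answer span, confidence score)."""
    result = qa_pipeline(question=question, context=context)
    return result["answer"], result["score"]


if __name__ == "__main__":
    # Heavy part: downloads the checkpoint from the Hub.
    from transformers import pipeline

    qa = pipeline(
        "question-answering",
        model="adrinanou/distilbert-base-uncased-finetuned-squac",
    )
    print(best_answer(qa, "Who painted Guernica?",
                      "Guernica was painted by Picasso in 1937."))
```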
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0135 | 1.0 | 842 | 2.2648 |
| 2.1735 | 2.0 | 1684 | 2.1575 |
| 1.9441 | 3.0 | 2526 | 2.0973 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3 | 1,499 | [
[
-0.033966064453125,
-0.052764892578125,
0.0106353759765625,
0.0175628662109375,
-0.024658203125,
-0.0261383056640625,
-0.0103302001953125,
-0.005054473876953125,
0.0018329620361328125,
0.0191802978515625,
-0.050689697265625,
-0.039459228515625,
-0.0523681640625,... |
TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML | 2023-09-27T13:01:43.000Z | [
"transformers",
"llama",
"llama-2",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML | 6 | 2 | transformers | 2023-08-21T19:38:25 | ---
license: llama2
tags:
- llama
- llama-2
model_name: L2 MythoMax 22B Instruct Falseblock
inference: false
model_creator: grimpep
model_link: https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock
model_type: llama
quantized_by: TheBloke
base_model: grimpep/L2-MythoMax22b-instruct-Falseblock
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# L2 MythoMax 22B Instruct Falseblock - GGML
- Model creator: [grimpep](https://huggingface.co/grimpep)
- Original model: [L2 MythoMax 22B Instruct Falseblock](https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock)
## Description
This repo contains GGML format model files for [grimpep's L2 MythoMax 22B Instruct Falseblock](https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the ctransformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML)
* [grimpep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
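The template above can be filled programmatically. A minimal sketch (the blank-line spacing follows the usual Alpaca convention):

```python
# Fills the Alpaca prompt template shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "{prompt}\n\n"
    "### Response:\n"
)


def format_alpaca(prompt: str) -> str:
    """Return the full prompt string to feed to the model."""
    return ALPACA_TEMPLATE.format(prompt=prompt)


print(format_alpaca("Name three Greek islands."))
```

The returned string can then be passed as the raw prompt to any GGML-capable runtime listed above, such as llama-cpp-python or KoboldCpp.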
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
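The bits-per-weight figures quoted above can be checked with a little arithmetic: a super-block stores the quantized weights, the per-block scales (and mins, for "type-1" formats), plus one or two fp16 super-block fields. A sketch of that count (the fp16 super-block fields are an assumption based on the common llama.cpp layout; Q2_K packs its metadata slightly differently and is left out of this sketch):

```python
def bits_per_weight(q_bits, n_blocks, block_size, scale_bits, min_bits, fp16_fields):
    """Effective bits per weight for one super-block:
    quantized weights + per-block scales/mins + fp16 super-block field(s)."""
    n_weights = n_blocks * block_size
    total_bits = (
        q_bits * n_weights                     # quantized weights
        + n_blocks * (scale_bits + min_bits)   # per-block scales (and mins for type-1)
        + 16 * fp16_fields                     # fp16 super-block scale (and min for type-1)
    )
    return total_bits / n_weights


# Reproduces the figures quoted above:
print(bits_per_weight(3, 16, 16, 6, 0, 1))  # Q3_K -> 3.4375
print(bits_per_weight(4, 8, 32, 6, 6, 2))   # Q4_K -> 4.5
print(bits_per_weight(5, 8, 32, 6, 6, 2))   # Q5_K -> 5.5
print(bits_per_weight(6, 16, 16, 8, 0, 1))  # Q6_K -> 6.5625
```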
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q2_K.bin) | q2_K | 2 | 9.22 GB| 11.72 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 9.46 GB| 11.96 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 10.57 GB| 13.07 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 11.61 GB| 14.11 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_0.bin) | q4_0 | 4 | 12.34 GB| 14.84 GB | Original quant method, 4-bit. |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 12.34 GB| 14.84 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 13.18 GB| 15.68 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_1.bin) | q4_1 | 4 | 13.69 GB| 16.19 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_0.bin) | q5_0 | 5 | 15.04 GB| 17.54 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 15.04 GB| 17.54 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 15.47 GB| 17.97 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q5_1.bin) | q5_1 | 5 | 16.39 GB| 18.89 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q6_K.bin) | q6_K | 6 | 17.91 GB| 20.41 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [l2-mythomax22b-instruct-Falseblock.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML/blob/main/l2-mythomax22b-instruct-Falseblock.ggmlv3.q8_0.bin) | q8_0 | 8 | 23.14 GB| 25.64 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
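A minimal sketch of the chat-style invocation described above (thread count `-t 8` is an example value; the model filename is from the Provided Files table):

```shell
# Chat-style session: -i -ins replaces the -p <PROMPT> argument.
./main -t 8 -ngl 32 \
  -m l2-mythomax22b-instruct-Falseblock.ggmlv3.q4_K_M.bin \
  --color -c 2048 --temp 0.7 --repeat_penalty 1.1 \
  -i -ins
```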
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: grimpep's L2 MythoMax 22B Instruct Falseblock
[Experimental model]
This model is an experiment using the frankenstein script from
https://huggingface.co/chargoddard/llama2-22b
BLOCK_DIAGONAL = False
Using https://huggingface.co/Gryphe/MythoMax-L2-13b as the base model, then https://huggingface.co/upstage/llama-30b-instruct-2048 as the donor model.
Merging these models used 160 GB of system RAM; they merge quickly provided no swap is needed.
For prompt template and model information see [MythoMax-L2](https://huggingface.co/Gryphe/MythoMax-L2-13b).
| 15,602 |
manuu01/DeBERTa-SeagullStory | 2023-08-21T21:26:03.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | manuu01 | null | null | manuu01/DeBERTa-SeagullStory | 0 | 2 | transformers | 2023-08-21T21:20:37 | [MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) further trained on a
custom dataset for SeagullStory space | 175 |
TheBloke/Llama2-22B-GPLATTY-GGML | 2023-09-27T13:01:44.000Z | [
"transformers",
"llama",
"llama-2",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | TheBloke | null | null | TheBloke/Llama2-22B-GPLATTY-GGML | 7 | 2 | transformers | 2023-08-21T21:23:09 | ---
license: llama2
tags:
- llama
- llama-2
model_name: Llama2 22B GPLATTY
inference: false
model_creator: grimpep
model_link: https://huggingface.co/grimpep/llama2-22B-GPLATTY
model_type: llama
quantized_by: TheBloke
base_model: grimpep/llama2-22B-GPLATTY
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 22B GPLATTY - GGML
- Model creator: [grimpep](https://huggingface.co/grimpep)
- Original model: [Llama2 22B GPLATTY](https://huggingface.co/grimpep/llama2-22B-GPLATTY)
## Description
This repo contains GGML format model files for [grimpep's Llama2 22B GPLATTY](https://huggingface.co/grimpep/llama2-22B-GPLATTY).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML)
* [grimpep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/grimpep/llama2-22B-GPLATTY)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
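The template above can be filled in programmatically; a minimal helper (hypothetical, not part of this repo) that wraps a user instruction in the Alpaca format:

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template used by this model."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Write a story about llamas"))
```

The resulting string can be passed directly as the `-p` argument to `./main`, as shown in the "How to run" section.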
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
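The bpw figures quoted for the k-quant types follow from the block layout: bits for the weights themselves plus bits for the per-block scales, amortized over the 256 weights of a super-block. The sketch below reproduces two of them; it assumes (not stated explicitly above) that each super-block also carries one fp16 (16-bit) super-scale.

```python
# Effective bits-per-weight for "type-0" k-quants, from the layout above:
# 16 blocks x 16 weights = 256 weights per super-block, plus one per-block
# scale each, plus an assumed fp16 super-block scale.
SUPER_BLOCK = 16 * 16  # 256 weights

def bpw(weight_bits: int, scale_bits: int, n_blocks: int = 16,
        super_scale_bits: int = 16) -> float:
    total_bits = (SUPER_BLOCK * weight_bits
                  + n_blocks * scale_bits
                  + super_scale_bits)
    return total_bits / SUPER_BLOCK

print(bpw(3, 6))  # GGML_TYPE_Q3_K -> 3.4375
print(bpw(6, 8))  # GGML_TYPE_Q6_K -> 6.5625
```

Both values match the figures quoted in the list above, which supports the assumed layout.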
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-22b-gplatty.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q2_K.bin) | q2_K | 2 | 9.22 GB| 11.72 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama2-22b-gplatty.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 9.46 GB| 11.96 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama2-22b-gplatty.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 10.57 GB| 13.07 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-22b-gplatty.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 11.61 GB| 14.11 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama2-22b-gplatty.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q4_0.bin) | q4_0 | 4 | 12.34 GB| 14.84 GB | Original quant method, 4-bit. |
| [llama2-22b-gplatty.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 12.34 GB| 14.84 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama2-22b-gplatty.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 13.18 GB| 15.68 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama2-22b-gplatty.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q4_1.bin) | q4_1 | 4 | 13.69 GB| 16.19 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama2-22b-gplatty.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q5_0.bin) | q5_0 | 5 | 15.04 GB| 17.54 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama2-22b-gplatty.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 15.04 GB| 17.54 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama2-22b-gplatty.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 15.47 GB| 17.97 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llama2-22b-gplatty.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q5_1.bin) | q5_1 | 5 | 16.39 GB| 18.89 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llama2-22b-gplatty.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q6_K.bin) | q6_K | 6 | 17.91 GB| 20.41 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llama2-22b-gplatty.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Llama2-22B-GPLATTY-GGML/blob/main/llama2-22b-gplatty.ggmlv3.q8_0.bin) | q8_0 | 8 | 23.14 GB| 25.64 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m llama2-22b-gplatty.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
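A sketch of an extended-context invocation using the RoPE flags described above, assuming a 4096-token native Llama 2 context (so `--rope-freq-scale 0.5` doubles it to 8192); `-t 8` is an example value:

```shell
# Doubled context via RoPE frequency scaling (flags as described above).
./main -t 8 -ngl 32 \
  -m llama2-22b-gplatty.ggmlv3.q4_K_M.bin \
  -c 8192 --rope-freq-base 10000 --rope-freq-scale 0.5 \
  --color --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
```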
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: grimpep's Llama2 22B GPLATTY
This model is an experiment using the frankenstein script from
https://huggingface.co/chargoddard/llama2-22b
I merged an already merged 13B Llama v2 model
GUACHAT + KIMIKO = https://huggingface.co/grimpep/13Bv2-llama-modelmerge
Then used https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16 as the donor model.
Merging these models used 160 GB of system RAM; they merge quickly provided no swap is needed.
No strange behaviour noticed when generating with this model (maybe more hallucinations than usual), but note that this is still all experimental.
This model might be more "ethically" aligned (it was not fine-tuned) than https://huggingface.co/grimpep/llama2-22b-wizard_vicuna
| 14,854 |