Update README.md
---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- detoxify
- nano
- small
- vulgar
- curse
---

# Detoxify-Language-Small (GGUF, Q8_0)

**TL;DR**: A compact detoxification model in **GGUF (Q8_0)** format for fast CPU inference via `llama.cpp` and compatible runtimes. File size: ~138.1 MiB.

## Files
- `small-base_Detoxify-Small_high_Q8_0.gguf` (SHA256: `98945b1291812eb85275fbf2bf60ff92522e7b80026c8301ff43127fdd52826e`; size: 144810464 bytes)

## Intended use
- **Task**: detoxification of text without changing its underlying meaning or context.
- **Hardware**: laptops/CPUs via `llama.cpp`; small GPUs with GGUF-compatible loaders.
- **Not for**: safety-critical or clinical use.

## How to run (llama.cpp)
> Replace the `-p` prompt with your own text. For classification, you can use a simple prompt like:
> `"Classify the following text as TOXIC or NON-TOXIC: <text>"`

```bash
# Build llama.cpp once (see upstream instructions), then:
./main -m small-base_Detoxify-Small_high_Q8_0.gguf -p "Classify the following text as TOXIC or NON-TOXIC: I hate you."
```

If your downstream workflow expects logits/labels directly, consider adapting a small wrapper that maps generated tokens to labels; a sketch of one possible approach follows.
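A minimal sketch of such a wrapper, assuming the `llama.cpp` `main` binary built as above is in the working directory (the `classify` helper, the prompt format, and the token budget are illustrative choices, not part of this release):

```python
# Hypothetical wrapper: run the llama.cpp binary as a subprocess and map the
# generated text onto a TOXIC / NON-TOXIC label.
import subprocess

MODEL = "small-base_Detoxify-Small_high_Q8_0.gguf"
PROMPT_TEMPLATE = "Classify the following text as TOXIC or NON-TOXIC: {text}"

def classify(text: str) -> str:
    """Return 'TOXIC', 'NON-TOXIC', or 'UNKNOWN' for the given text."""
    prompt = PROMPT_TEMPLATE.format(text=text)
    result = subprocess.run(
        ["./main", "-m", MODEL, "-p", prompt, "-n", "8"],
        capture_output=True,
        text=True,
        check=True,
    )
    # llama.cpp usually echoes the prompt before generating, so inspect only what
    # follows it; if the prompt is not found, fall back to the whole output.
    generated = result.stdout.split(prompt, 1)[-1].upper()
    if "NON-TOXIC" in generated:
        return "NON-TOXIC"
    if "TOXIC" in generated:
        return "TOXIC"
    return "UNKNOWN"

if __name__ == "__main__":
    print(classify("I hate you."))
```

Depending on your build, the binary may be called `llama-cli` instead of `main`; the label-mapping logic stays the same.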

## Model details
- **Format**: GGUF (quantized: **Q8_0**)
- **Architecture**: LlamaForCausalLM
- **Tokenizer**: embedded in the GGUF file; if you use a custom tokenizer, document it here.
- **Context length**: not explicitly extracted here; typical small models use 2048–4096. Fill in if known; the exact value can be read from the GGUF metadata, as sketched after this list.
- **Base model / provenance**: Fine-tuned from the Minibase Small Base model at minibase.ai.
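To confirm the context length, tokenizer, and other embedded metadata, one option (an assumption, not part of this release) is the `gguf-dump` tool that ships with the `gguf` Python package maintained alongside `llama.cpp`:

```bash
# Install the gguf helper package, then print the metadata embedded in the model file.
pip install gguf
gguf-dump small-base_Detoxify-Small_high_Q8_0.gguf
```

Look for keys such as `llama.context_length` and the `tokenizer.ggml.*` entries in the output.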

> If you can share the base model and training data (even briefly), add a short bullet list here to improve discoverability.

## Limitations & bias
- Toxicity detection can reflect dataset and annotation biases. Use with caution, especially on dialects and minority language varieties.
- Performance in languages other than English is likely reduced unless the model was trained multilingually.

## License
- **MIT**

## Checksums
- `small-base_Detoxify-Small_high_Q8_0.gguf` — `SHA256: 98945b1291812eb85275fbf2bf60ff92522e7b80026c8301ff43127fdd52826e`
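To verify a download against this digest (assuming GNU coreutils' `sha256sum` is available):

```bash
# Compare the local file against the published SHA-256 digest; prints "OK" on a match.
echo "98945b1291812eb85275fbf2bf60ff92522e7b80026c8301ff43127fdd52826e  small-base_Detoxify-Small_high_Q8_0.gguf" | sha256sum --check
```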

## Changelog
- Initial upload.