Upload README.md with huggingface_hub

README.md (CHANGED)

Removed (previous version):

---
language:
- en
pipeline_tag: text-generation
tags:
- detoxify
- nano
- small
- vulgar
- curse
---

## Intended Use

- **Task**: detoxification of text, without changing the context of that text.
- **Hardware**: laptops/CPUs via `llama.cpp`; small GPUs with GGUF loaders.
- **Not for**: safety-critical or clinical use.

## Model Details

- **Format**: GGUF (quantized: **Q8_0**)
- **Architecture**: LlamaForCausalLM
- **Tokenizer**: (embedded in GGUF; if you use a custom tokenizer, document it here)
- **Context length**: (not explicitly extracted here; typical small models use 2048-4096; fill if known)
- **Base model / provenance**: Fine-tuned from the Minibase Small Base model at minibase.ai.

## Limitations and Biases

- Toxicity detection can reflect dataset and annotation biases. Use with caution, especially on dialects and minority language varieties.
- Performance in languages other than English is likely reduced unless trained multi-lingually.

## License

- **MIT**

Added (new version):
# Detoxify-Small - GGUF Model Package

This package contains a GGUF (GPT-Generated Unified Format) model file and all necessary configuration files to run the model locally.
## Model Information

- **Model Name**: Detoxify-Small
- **Base Model**:
- **Architecture**: LlamaForCausalLM
- **Context Window**: 1024 tokens
- **Format**: GGUF (optimized for local inference)
## Files Included

- `model.gguf` - The quantized model file
- `inference.lock.json` - Server configuration
- `model_info.json` - Model metadata
- `run_server.sh` - Script to start the inference server
- `README.md` - This file
- `USAGE.md` - Usage examples and instructions
## Quick Start

1. Make sure you have [llama.cpp](https://github.com/ggerganov/llama.cpp) installed
2. Run the provided script:

   ```bash
   ./run_server.sh
   ```

3. The server will start on http://127.0.0.1:8000
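Once the server is up, you can run a quick smoke test from another terminal. This sketch only builds a request body and prints it; the prompt wording and the `request.json` filename are illustrative, and `curl` plus the default host/port are assumed:

```bash
# Build a /completion request body for the server.
# The prompt below is a hypothetical example of a detoxification request.
PROMPT='Rewrite this politely: Get out of my way!'
cat > request.json <<EOF
{"prompt": "${PROMPT}", "n_predict": 64, "temperature": 0.2}
EOF
cat request.json
# Send it once the server is running:
# curl -s http://127.0.0.1:8000/completion -d @request.json
```

Keeping the request in a file makes it easy to tweak sampling parameters such as `n_predict` without retyping the whole command.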
+
## Manual Setup
|
| 32 |
+
|
| 33 |
+
If you prefer to run manually:
|
| 34 |
|
| 35 |
```bash
|
| 36 |
+
# Start the server
|
| 37 |
+
llama-server \
|
| 38 |
+
-m model.gguf \
|
| 39 |
+
--host 127.0.0.1 \
|
| 40 |
+
--port 8000 \
|
| 41 |
+
--n-gpu-layers 0 \
|
| 42 |
+
--chat-template ""```
|
| 43 |
|
## API Usage

Once the server is running, you can make requests to:

- **Health Check**: `GET http://127.0.0.1:8000/health`
- **Completion**: `POST http://127.0.0.1:8000/completion`
- **Tokenization**: `POST http://127.0.0.1:8000/tokenize`
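The completion endpoint replies with JSON whose `content` field holds the generated text. A minimal sketch of extracting it, using a canned sample response rather than a live server (assumes `jq` is installed; the sample text is made up):

```bash
# Canned example of a /completion response shape (other fields omitted).
RESPONSE='{"content": "Please step aside.", "stop": true}'
# Pull out just the generated text with jq's raw-output mode:
echo "$RESPONSE" | jq -r .content
# → Please step aside.
```

The same pattern works on a real response: pipe the `curl` output into `jq -r .content`.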
## Requirements

- llama.cpp (latest version recommended)
- At least 8GB RAM (16GB recommended)
- For GPU acceleration: Metal (macOS), CUDA (Linux/Windows), or Vulkan
## Troubleshooting

- If you get memory errors, reduce `--n-gpu-layers` or use a smaller model
- On slower machines, pass a smaller `--ctx-size` to reduce the context window (this model's window is 1024 tokens, so use a value below that, e.g. `--ctx-size 512`)
- Check `USAGE.md` for detailed examples and troubleshooting tips
---

Generated on 2025-09-17 20:07:11