---
title: Model Tools
emoji: 📚
colorFrom: pink
colorTo: yellow
sdk: static
pinned: false
---

# Model Tools by Naphula

Tools to enhance LLM quantizations and merging

# [graph_v4.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v4.py)
- Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user, this script enables merges that would otherwise hit out-of-memory errors, such as 70B models or large 7B merges with `--cuda`. [More details here](https://huggingface.co/spaces/Naphula/model_tools/blob/main/mergekit_low-VRAM-graph_patch.md)

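The patch itself is described in the linked doc; as a rough illustration of why per-tensor streaming keeps VRAM low, here is a generic linear-merge sketch. The file names and the 50/50 blend are placeholders, not the script's actual behavior.

```python
# Generic illustration only, not graph_v4.py: keep one tensor pair on the
# GPU at a time instead of whole checkpoints, so peak VRAM stays small.
import torch
from safetensors.torch import load_file, save_file

a = load_file("model_a.safetensors")  # placeholder single-file checkpoints
b = load_file("model_b.safetensors")

merged = {}
for name, tensor in a.items():
    # upcast, blend on the GPU, then immediately move the result off-device
    t = 0.5 * tensor.cuda().float() + 0.5 * b[name].cuda().float()
    merged[name] = t.half().cpu()
    del t
torch.cuda.empty_cache()

save_file(merged, "merged.safetensors")
```
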
# [fp32_to_fp16.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fp32_to_fp16.py)
- Converts FP32 safetensors to FP16

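A minimal sketch of the same idea (file names are placeholders; the actual script may handle sharding and metadata differently):

```python
# Minimal sketch: read every tensor, downcast FP32 -> FP16, re-save.
import torch
from safetensors.torch import load_file, save_file

tensors = load_file("model_fp32.safetensors")  # placeholder name
tensors = {name: t.half() if t.dtype == torch.float32 else t
           for name, t in tensors.items()}
save_file(tensors, "model_fp16.safetensors")
```
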
# [textonly_ripper_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper_v2.py)
- Converts a sharded, multimodal (text and vision) model into a text-only version. Readme at [textonly_ripper.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper.md)

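In outline, the job is to drop the vision tensors from each shard and rebuild the index. The prefixes below are assumptions for illustration; see the readme for what the script actually strips.

```python
# Hypothetical sketch: filter vision-tower tensors out of each shard.
# The prefix names are assumptions; real multimodal checkpoints vary.
import glob
from safetensors.torch import load_file, save_file

VISION_PREFIXES = ("vision_tower.", "multi_modal_projector.")

for shard in sorted(glob.glob("model-*.safetensors")):
    tensors = load_file(shard)
    kept = {k: v for k, v in tensors.items()
            if not k.startswith(VISION_PREFIXES)}
    save_file(kept, shard)  # overwrites in place; regenerate the index json after
```
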
# [vocab_resizer.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/vocab_resizer.py)
- Converts models with larger vocab_sizes to a standard size (default 131072, the Mistral 24B vocabulary) for use with mergekit. Note that `tokenizer.model` must be manually copied into the `/fixed/` folder.

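The core operation amounts to trimming the token-dimension rows of the embedding matrices. A sketch under assumed Llama/Mistral tensor names, not the script's actual logic:

```python
# Illustrative only: truncate embed/lm_head rows to the target vocab size.
# Tensor names are assumptions; update config.json's vocab_size to match.
import os
from safetensors.torch import load_file, save_file

TARGET = 131072  # the script's default (Mistral 24B)
tensors = load_file("model.safetensors")
for name in ("model.embed_tokens.weight", "lm_head.weight"):
    if name in tensors and tensors[name].shape[0] > TARGET:
        tensors[name] = tensors[name][:TARGET].contiguous()

os.makedirs("fixed", exist_ok=True)
save_file(tensors, "fixed/model.safetensors")
```
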
# [lm_head_remover.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/lm_head_remover.py)
- Loads a "fat" 18.9GB model (default Gemma 9B), forces it to tie its weights (deduplicating the `lm_head`), and re-saves it. This drops the file size to ~17.2GB and makes it compatible with the others.

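One way to do this with plain transformers (a sketch, not necessarily how the script works; the paths are placeholders):

```python
# Sketch: tie lm_head to the input embeddings and re-save, so the
# duplicated output-projection weights are no longer stored on disk.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/fat-model")  # placeholder
model.config.tie_word_embeddings = True
model.tie_weights()  # lm_head now shares storage with embed_tokens
model.save_pretrained("path/to/tied-model", safe_serialization=True)
```
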
# [model_index_json_generator.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/model_index_json_generator.py)
- Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size.

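The index format is simple enough to rebuild by scanning the shards. A sketch; `total_size` is approximated here by file size, and the real script may compute it from tensor sizes instead:

```python
# Sketch: rebuild model.safetensors.index.json from the shards on disk.
import glob, json, os
from safetensors import safe_open

weight_map, total_size = {}, 0
for shard in sorted(glob.glob("model-*.safetensors")):
    total_size += os.path.getsize(shard)  # approximation of total tensor bytes
    with safe_open(shard, framework="pt") as f:
        for key in f.keys():
            weight_map[key] = os.path.basename(shard)

index = {"metadata": {"total_size": total_size}, "weight_map": weight_map}
with open("model.safetensors.index.json", "w") as f:
    json.dump(index, f, indent=2)
```
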
# [folder_content_combiner_anyfiles.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/folder_content_combiner_anyfiles.py)
- Combines all files in the script's current directory into a single output file, sorted alphabetically.

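Roughly equivalent to the following; the output name and header format are assumptions:

```python
# Sketch: concatenate every regular file in the current directory,
# alphabetically, into one combined output file.
import os

OUTPUT = "combined_output.txt"  # assumed name
skip = {OUTPUT, os.path.basename(__file__)}
files = sorted(f for f in os.listdir(".") if os.path.isfile(f) and f not in skip)

with open(OUTPUT, "w", encoding="utf-8") as out:
    for name in files:
        out.write(f"===== {name} =====\n")
        with open(name, encoding="utf-8", errors="replace") as src:
            out.write(src.read() + "\n")
```
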
# [GGUF Repo Suite](https://huggingface.co/spaces/Naphula/gguf-repo-suite)
- Create and quantize Hugging Face models

# [Failed Experiment gguf_to_safetensors_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/gguf_to_safetensors_v2.py)
- An unsuccessful attempt by Gemini to patch the gguf_to_safetensors script; the missing JSON files are hard to reconstruct. Also see [safetensors_meta_ripper_v1.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/safetensors_meta_ripper_v1.py) and [tokenizer_ripper_v1.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokenizer_ripper_v1.py)

# [Markdown Viewer](https://huggingface.co/spaces/Naphula/Portable_Offline_Markdown_Viewer)
- Portable Offline Markdown Viewer

# [Markdown to SMF](https://huggingface.co/spaces/Naphula/model_tools/blob/main/md_to_smf.py)
- Converts a Markdown string to an SMF-compatible BBCode string. Not perfect; it sometimes misses double bold tags.

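A toy version of the substitution approach (the real script is more thorough; regex rewriting is also why nested bold can slip through):

```python
# Toy sketch of Markdown -> SMF BBCode via regex substitution.
# Order matters: convert links before bold so brackets aren't mangled.
import re

def md_to_bbcode(text: str) -> str:
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"[url=\2]\1[/url]", text)  # links
    text = re.sub(r"\*\*(.+?)\*\*", r"[b]\1[/b]", text)                   # bold
    text = re.sub(r"\*(.+?)\*", r"[i]\1[/i]", text)                       # italic
    return text

print(md_to_bbcode("**bold** and [a link](https://example.com)"))
# -> [b]bold[/b] and [url=https://example.com]a link[/url]
```
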
# [Quant Clone](https://github.com/electroglyph/quant_clone)
- A tool that lets you recreate UD quants such as Q8_K_XL. Examples: [Mistral 24B](https://huggingface.co/spaces/Naphula/model_tools/raw/main/Mistral-Small-3.2-24B-Instruct-2506-UD-Q8_K_XL_UD.txt), [Mistral 7B](https://huggingface.co/spaces/Naphula/model_tools/raw/main/Warlock-7B-v2-Q8_K_XL.txt)

# [Text Analysis Suite v1.5](https://huggingface.co/spaces/Naphula/TAS_1.5)
- Analyze text files with advanced metrics