---
title: Model Tools
emoji: 📚
colorFrom: pink
colorTo: yellow
sdk: static
pinned: false
---
# Model Tools by Naphula

Tools to enhance LLM quantization and merging
fp32_to_fp16.py
- Converts FP32 to FP16 safetensors
textonly_ripper_v2.py
- Converts a sharded, multimodal (text and vision) model into a text-only version. Readme at textonly_ripper.md
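The essential step can be sketched as filtering vision-stack tensors out of the state dict before re-sharding; the prefix names below are assumptions and vary by architecture:

```python
# Assumed tensor-name prefixes for the vision stack; check your model's keys
VISION_PREFIXES = ("vision_tower.", "multi_modal_projector.")

def strip_vision_tensors(state_dict: dict) -> dict:
    """Keep only tensors that do not belong to the vision stack."""
    return {name: t for name, t in state_dict.items()
            if not name.startswith(VISION_PREFIXES)}
```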
vocab_resizer.py
- Converts models with larger vocab sizes to a standard size (default 131072, Mistral 24B) for use with mergekit. Note that `tokenizer.model` must be manually copied into the `/fixed/` folder.
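Resizing a vocab-indexed matrix to a target row count amounts to truncating or zero-padding its first dimension; a minimal sketch, with numpy standing in for the real tensors:

```python
import numpy as np

def resize_vocab(weight: np.ndarray, target: int = 131072) -> np.ndarray:
    """Truncate or zero-pad the vocab (first) dimension to `target` rows."""
    vocab, hidden = weight.shape
    if vocab >= target:
        return weight[:target]
    pad = np.zeros((target - vocab, hidden), dtype=weight.dtype)
    return np.concatenate([weight, pad], axis=0)
```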
lm_head_remover.py
- Loads a "fat" 18.9 GB model (default Gemma 9B), forces it to tie its weights (deduplicating `lm_head`), and re-saves it. This drops the file size to ~17.2 GB and makes it compatible with the others.
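At the state-dict level, the deduplication can be sketched as dropping the redundant head tensor; loaders configured with `tie_word_embeddings=True` re-tie it on load. The key names below are common transformers conventions, but check your model:

```python
def drop_tied_lm_head(state_dict: dict,
                      head_key: str = "lm_head.weight",
                      embed_key: str = "model.embed_tokens.weight") -> dict:
    """Remove the lm_head tensor when an embedding matrix exists to tie to."""
    out = dict(state_dict)
    if head_key in out and embed_key in out:
        del out[head_key]
    return out
```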
model_index_json_generator.py
- Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size.
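Because every safetensors shard begins with an 8-byte little-endian header length followed by a JSON header, the weight map can be rebuilt with the standard library alone. A sketch (the shard-name glob is an assumption):

```python
import json
import struct
from pathlib import Path

def read_header(path: Path) -> dict:
    """Parse the JSON header of a .safetensors file (8-byte LE length prefix)."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

def build_index(folder: str) -> dict:
    """Map every tensor name to its shard and sum the data sizes."""
    weight_map, total = {}, 0
    for shard in sorted(Path(folder).glob("model-*.safetensors")):
        for name, meta in read_header(shard).items():
            if name == "__metadata__":
                continue
            weight_map[name] = shard.name
            total += meta["data_offsets"][1] - meta["data_offsets"][0]
    return {"metadata": {"total_size": total}, "weight_map": weight_map}
```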
folder_content_combiner_anyfiles.py
- Combines all files in the script's current directory into a single output file, sorted alphabetically.
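A minimal stdlib sketch of that behavior (the output filename is illustrative):

```python
from pathlib import Path

def combine_folder(folder: str, out_name: str = "combined.txt") -> Path:
    """Concatenate every file in `folder` in alphabetical order into one
    output file, skipping the output file itself."""
    root = Path(folder)
    out = root / out_name
    parts = [f.read_text(errors="replace")
             for f in sorted(root.iterdir())
             if f.is_file() and f.name != out_name]
    out.write_text("".join(parts))
    return out
```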
GGUF Repo Suite
- Create and quantize Hugging Face models
gguf_to_safetensors_v2.py (failed experiment)
- An unsuccessful attempt by Gemini to patch the gguf_to_safetensors script; the missing JSON files are hard to reconstruct. Also see safetensors_meta_ripper_v1.py and tokenizer_ripper_v1.py.
Markdown Viewer
- Portable Offline Markdown Viewer
Markdown to SMF
- Converts a Markdown string to an SMF-compatible BBCode string. Not perfect—sometimes misses double bold tags.
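The core substitution can be sketched with a few regexes. This toy version handles only bold, italic, and inline code, and, like the tool, it can mishandle nested or doubled markers:

```python
import re

def md_to_bbcode(text: str) -> str:
    """Convert a small subset of Markdown to SMF-style BBCode."""
    text = re.sub(r"\*\*(.+?)\*\*", r"[b]\1[/b]", text)   # **bold**
    text = re.sub(r"\*(.+?)\*", r"[i]\1[/i]", text)       # *italic*
    text = re.sub(r"`(.+?)`", r"[code]\1[/code]", text)   # `inline code`
    return text
```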
Quant Clone
- A tool for recreating UD quants such as Q8_K_XL. Examples: Mistral 24B, Mistral 7B
Text Analysis Suite
- Pending reupload