---
base_model: theprint/Zeth-Gemma3-4B
library_name: peft
pipeline_tag: text-generation
language: en
license: apache-2.0
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
datasets:
- theprint/Gentle-Pushback-8.5k-alpaca
---

# Genuine-Zeth-4B

A fine-tune of Zeth-Gemma3-4B (itself already tuned for "pragmatic empathy"), trained further for more engaging conversation without sycophantic responses.

## Model Details

This model is a fine-tuned version of theprint/Zeth-Gemma3-4B, trained with the Unsloth framework using LoRA (Low-Rank Adaptation) for efficient training.

- **Developed by:** theprint
- **Model type:** Causal Language Model (Fine-tuned with LoRA)
- **Language:** en
- **License:** apache-2.0
- **Base model:** theprint/Zeth-Gemma3-4B
- **Fine-tuning method:** LoRA with rank 128

## Intended Use

Brainstorming, idea development, and general conversation.

## GGUF Quantized Versions

Quantized GGUF versions are available in the [theprint/Genuine-Zeth-4B-GGUF](https://huggingface.co/theprint/Genuine-Zeth-4B-GGUF) repo.

- `Genuine-Zeth-4B-f16.gguf` (8688.3 MB) - 16-bit float (original precision, largest file)
- `Genuine-Zeth-4B-q3_k_m.gguf` (2276.3 MB) - 3-bit quantization (medium quality)
- `Genuine-Zeth-4B-q4_k_m.gguf` (2734.6 MB) - 4-bit quantization (medium, recommended for most use cases)
- `Genuine-Zeth-4B-q5_k_m.gguf` (3138.7 MB) - 5-bit quantization (medium, good quality)
- `Genuine-Zeth-4B-q6_k.gguf` (3568.1 MB) - 6-bit quantization (high quality)
- `Genuine-Zeth-4B-q8_0.gguf` (4619.2 MB) - 8-bit quantization (very high quality)

## Training Details

### Training Data

The dataset was created to limit sycophancy in language models by encouraging them to (gently) push back and call out bad ideas.

- **Dataset:** theprint/Gentle-Pushback-8.5k-alpaca
- **Format:** alpaca

### Training Procedure

- **Training epochs:** 2
- **LoRA rank:** 128
- **Learning rate:** 0.0001
- **Batch size:** 6
- **Framework:** Unsloth + transformers + PEFT
- **Hardware:** NVIDIA RTX 5090
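For reference, below is a minimal sketch of a training run matching the hyperparameters above, using Unsloth with TRL's `SFTTrainer`. This is not the exact script used to train this model: the Alpaca prompt flattening, `lora_alpha`, `target_modules`, and optimizer settings are assumptions, and the exact `SFTTrainer` keyword arguments vary between TRL versions.

```python
# Minimal training sketch matching the hyperparameters above (Unsloth + TRL).
# Assumptions (not stated on the card): the Alpaca prompt flattening,
# lora_alpha, and target_modules. Exact SFTTrainer keyword arguments
# differ between TRL versions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit for memory-efficient LoRA training
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theprint/Zeth-Gemma3-4B",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters at rank 128, as listed in the training procedure
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,  # assumption: alpha is not stated on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Flatten Alpaca-format records (instruction / input / output) into plain text
def to_text(example):
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n\n" + example["input"]
    return {"text": f"{prompt}\n\n{example['output']}{tokenizer.eos_token}"}

dataset = load_dataset("theprint/Gentle-Pushback-8.5k-alpaca", split="train")
dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=6,  # batch size 6
        num_train_epochs=2,             # 2 epochs
        learning_rate=1e-4,             # 0.0001
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```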
## Usage

```python
from unsloth import FastLanguageModel
import torch

# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theprint/Genuine-Zeth-4B",
    max_seq_length=4096,
    dtype=None,
    load_in_4bit=True,
)

# Enable inference mode
FastLanguageModel.for_inference(model)

# Example usage: move inputs to the model's device before generating
inputs = tokenizer(["Your prompt here"], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Alternative Usage (Standard Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "theprint/Genuine-Zeth-4B",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("theprint/Genuine-Zeth-4B")

# Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Your question here"}
]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

### Using with llama.cpp

```bash
# Download a quantized version (q4_k_m recommended for most use cases)
wget https://huggingface.co/theprint/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B-q4_k_m.gguf

# Run with llama.cpp (the binary is named llama-cli in recent builds, main in older ones)
./llama.cpp/llama-cli -m Genuine-Zeth-4B-q4_k_m.gguf -p "Your prompt here" -n 256
```

## Limitations

May provide incorrect information.

## Citation

If you use this model, please cite:

```bibtex
@misc{genuine_zeth_4b,
  title={Genuine-Zeth-4B: Fine-tuned theprint/Zeth-Gemma3-4B},
  author={theprint},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/theprint/Genuine-Zeth-4B}
}
```

## Acknowledgments

- Base model: [theprint/Zeth-Gemma3-4B](https://huggingface.co/theprint/Zeth-Gemma3-4B)
- Training dataset: [theprint/Gentle-Pushback-8.5k-alpaca](https://huggingface.co/datasets/theprint/Gentle-Pushback-8.5k-alpaca)
- Fine-tuning framework: [Unsloth](https://github.com/unslothai/unsloth)
- Quantization: [llama.cpp](https://github.com/ggerganov/llama.cpp)