---
license: mit
language:
  - en
---

# Gemma3-27B-FT-Q8

This repository contains a fine-tuned version of the Gemma-3-27B model, quantized to 8-bit precision.

## Installation

To use this model, install the `transformers` and `torch` libraries:

```bash
pip install transformers torch
```
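
Depending on how the 8-bit weights were produced, loading them through `transformers` may also require the `bitsandbytes` and `accelerate` packages. This is an assumption based on the checkpoint name; skip it if the model loads without them:

```bash
pip install bitsandbytes accelerate
```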

## Usage

You can use the model for text generation. Here is an example of how to load the model and generate text:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("snxtyle/gemma3-27b-ft-q8")
model = AutoModelForCausalLM.from_pretrained("snxtyle/gemma3-27b-ft-q8")

# Tokenize the prompt
prompt = "Where is DPIP failing?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate text and decode it back to a string
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is a basic example. You can find more information about the `generate` method and its parameters in the Hugging Face documentation.
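
As an illustration, here is a minimal sketch that passes common sampling parameters to `generate` and loads the model explicitly in 8-bit via `BitsAndBytesConfig`. The quantization settings and parameter values are assumptions for demonstration, not confirmed details of how this repository's weights were produced:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "snxtyle/gemma3-27b-ft-q8"

# Assumption: load the weights in 8-bit through bitsandbytes to reduce memory.
# If the checkpoint is already stored in 8-bit, plain from_pretrained may suffice.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)

prompt = "Where is DPIP failing?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Common generate parameters (all optional; values here are illustrative)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,   # cap on the number of generated tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # lower = more deterministic
    top_p=0.9,            # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With `do_sample=False` (the default), `generate` performs greedy decoding and the sampling parameters such as `temperature` and `top_p` are ignored.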

## License

This project is licensed under the MIT License. See the LICENSE file for details.


Copyright (c) 2025 snxtyle