
# Instella 3B FP16

This is an FP16 (half-precision) version of amd/Instella-3B-Instruct.

- **Precision:** FP16 (half precision)
- **Use case:** CPU or GPU inference with reduced memory usage
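To see why FP16 reduces memory, note that each parameter takes 2 bytes instead of the 4 bytes used by FP32. A quick back-of-the-envelope sketch for a ~3B-parameter model (the parameter count here is approximate, for illustration only):

```python
# Rough weight-memory estimate: bytes per parameter times parameter count.
n_params = 3_000_000_000  # ~3B parameters (approximate)

bytes_fp32 = n_params * 4  # FP32: 4 bytes per parameter
bytes_fp16 = n_params * 2  # FP16: 2 bytes per parameter

print(f"FP32 weights: ~{bytes_fp32 / 1e9:.0f} GB")  # ~12 GB
print(f"FP16 weights: ~{bytes_fp16 / 1e9:.0f} GB")  # ~6 GB
```

Actual memory use at inference time is somewhat higher, since activations and the KV cache add to the weight footprint.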

## Example Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Replace "username" with this repo's actual id. Instella is a custom
# architecture, so trust_remote_code may be required to load it.
tokenizer = AutoTokenizer.from_pretrained("username/instella-3b-fp16", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "username/instella-3b-fp16",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

prompt = "Explain quantum computing in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```