How to use
Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Praneeth/code-gemma-2b-it to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Praneeth/code-gemma-2b-it to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Praneeth/code-gemma-2b-it to start chatting
Load model with FastModel
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
    model_name="Praneeth/code-gemma-2b-it",
    max_seq_length=2048,
)
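Once loaded, the model and tokenizer behave like standard transformers objects. A minimal generation sketch (the prompt and generation parameters here are illustrative):

inputs = tokenizer("Write a Python function to reverse a string.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))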

Code-Gemma-2B

Description

Code-Gemma is the Gemma-2B-it model finetuned for 1k steps on the CodeAlpaca-20k dataset using the unsloth library.
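For reference, a finetune like this is typically produced with unsloth's LoRA workflow plus TRL's SFTTrainer. The sketch below is illustrative rather than the author's actual training script: the dataset id, field names, LoRA rank, and hyperparameters are assumptions, and the SFTTrainer keyword arguments follow an older trl API.

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit and attach LoRA adapters (rank is an assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-2b-it",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# CodeAlpaca-20k; the dataset id and field names are assumptions
dataset = load_dataset("sahil2801/CodeAlpaca-20k", split="train")
dataset = dataset.map(lambda ex: {
    "text": f"Instruction: {ex['instruction']}\nResponse: {ex['output']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        max_steps=1000,  # "1k steps" per the description
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()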

Usage

Below are some code snippets to help you quickly get started with running the model.

import torch
major_version, minor_version = torch.cuda.get_device_capability()

!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Newer GPUs (Ampere, Hopper: RTX 30xx, RTX 40xx, A100, H100, L40) also get flash-attn
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass
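After installation, the checkpoint can also be loaded in 4-bit through unsloth for faster, lower-memory inference. A minimal sketch (load_in_4bit and the for_inference switch assume a recent unsloth version):

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Praneeth/code-gemma-2b-it",
    max_seq_length=2048,
    load_in_4bit=True,  # quantized weights via bitsandbytes
)
FastLanguageModel.for_inference(model)  # enable unsloth's optimized inference path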

Running the model on a GPU using different precisions

  • Using torch.float16

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Praneeth/code-gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "Praneeth/code-gemma-2b-it", device_map="auto", torch_dtype=torch.float16
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
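  • Using torch.bfloat16

The same snippet works at other precisions; a bfloat16 variant (assuming an Ampere-or-newer GPU with bfloat16 support) only changes the dtype argument:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Praneeth/code-gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "Praneeth/code-gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16
)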
