Usage with llama-cpp-python
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="satejh/gemma2-2b-technical-assistant",
	filename="gemma2-2b-technical-assistant-Q4_K_M.gguf",
)
response = llm.create_chat_completion(
	messages=[
		{
			"role": "user",
			"content": "What is the capital of France?"
		}
	]
)
print(response["choices"][0]["message"]["content"])
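`create_chat_completion` returns an OpenAI-style completion dict. A minimal sketch of that shape and how to pull out the reply text (the sample dict below is illustrative, not real model output):

```python
# Illustrative response in the OpenAI-style shape returned by
# llama-cpp-python's create_chat_completion (not actual model output).
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
}

# The assistant's reply text lives under choices[0].message.content.
reply = sample_response["choices"][0]["message"]["content"]
print(reply)
```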

gemma2-2b-technical-assistant

Fine-tuned Gemma 2 2B IT model for personalized technical assistance.

Model Description

This model is a QLoRA fine-tuned version of google/gemma-2-2b-it, specialized for:

  • AWS cloud security guidance
  • FastAPI/Python backend development
  • Finance application development
  • Kubernetes workload management
  • ISO 27001:2022 compliance

Training Details

  • Base Model: google/gemma-2-2b-it
  • Fine-tuning Method: QLoRA (4-bit quantization)
  • LoRA Rank: 16
  • LoRA Alpha: 32
  • Training Epochs: 5
  • Hardware: Google Colab T4 GPU
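As a rough sanity check on the setup above: a rank-r LoRA adapter on a weight matrix with input size d_in and output size d_out adds r*(d_in + d_out) trainable parameters, and its update is scaled by alpha/r. A small sketch (the layer shapes are hypothetical, for illustration only, not Gemma 2's actual dimensions):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters added by one LoRA adapter:
    A has shape (r, d_in), B has shape (d_out, r)."""
    return r * d_in + d_out * r

r, alpha = 16, 32  # values from the training details above

# Hypothetical (d_in, d_out) pairs for two adapted projection layers.
layers = [(2304, 2048), (2048, 2304)]
total = sum(lora_param_count(d_in, d_out, r) for d_in, d_out in layers)
print(f"adapter params: {total}, scaling alpha/r = {alpha / r}")
```

With rank 16 and alpha 32, the adapter update is scaled by 2.0, and each adapted layer adds only a small fraction of the base layer's parameters, which is what makes QLoRA feasible on a single T4.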

Usage

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("satejh/gemma2-2b-technical-assistant")
tokenizer = AutoTokenizer.from_pretrained("satejh/gemma2-2b-technical-assistant")

prompt = "<start_of_turn>user\nWhat database should I use?<end_of_turn>\n<start_of_turn>model\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
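The hand-built prompt string above follows Gemma's chat markup. A small helper that assembles the same format for arbitrary turns (a sketch of the `<start_of_turn>` convention, not a replacement for `tokenizer.apply_chat_template`):

```python
def gemma_prompt(turns):
    """Format (role, text) pairs into Gemma chat markup, leaving the
    prompt open at <start_of_turn>model for the model's reply."""
    parts = [f"<start_of_turn>{role}\n{text}<end_of_turn>\n" for role, text in turns]
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

# Reproduces the literal prompt string used in the example above.
prompt = gemma_prompt([("user", "What database should I use?")])
print(prompt)
```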

Ollama

Download the GGUF file and Modelfile from this repo, then:

ollama create gemma2-2b-technical-assistant -f Modelfile
ollama run gemma2-2b-technical-assistant
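If you need to adapt the Modelfile, a minimal one for a Gemma 2 GGUF might look like the following; the filename, template, and stop token here are assumptions based on Gemma's chat format, so check them against the actual Modelfile shipped in this repo:

```
FROM ./gemma2-2b-technical-assistant-Q4_K_M.gguf

TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""

PARAMETER stop "<end_of_turn>"
```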

Intended Use

This model is designed as a personalized technical assistant with:

  • Security-first approach
  • Read-only database interactions
  • Direct, actionable responses
  • AWS and Kubernetes expertise