---
license: gemma
base_model: google/gemma-2-2b-it
tags:
  - gemma2
  - fine-tuned
  - qlora
  - technical-assistant
  - aws
  - security
  - finance
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# gemma2-2b-technical-assistant

Fine-tuned Gemma 2 2B IT model for personalized technical assistance.

## Model Description

This model is a QLoRA fine-tuned version of google/gemma-2-2b-it, specialized for:

- AWS cloud security guidance
- FastAPI/Python backend development
- Finance application development
- Kubernetes workload management
- ISO 27001:2022 compliance

## Training Details

- **Base Model:** google/gemma-2-2b-it
- **Fine-tuning Method:** QLoRA (4-bit quantization)
- **LoRA Rank:** 16
- **LoRA Alpha:** 32
- **Training Epochs:** 5
- **Hardware:** Google Colab T4 GPU

## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("satejh/gemma2-2b-technical-assistant")
tokenizer = AutoTokenizer.from_pretrained("satejh/gemma2-2b-technical-assistant")

# Gemma chat format: user turn, then an open model turn for generation
prompt = "<start_of_turn>user\nWhat database should I use?<end_of_turn>\n<start_of_turn>model\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
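If you are sending more than one question, the turn markup in the prompt above can be factored into a small helper. `build_prompt` is a name introduced here for illustration, not a function shipped with the model:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn format,
    leaving the model turn open for generation."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_prompt("What database should I use?"))
```

The resulting string is identical to the hand-written `prompt` in the example above and can be passed straight to the tokenizer.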

### Ollama

Download the GGUF file and Modelfile from this repo, then:

```bash
ollama create gemma2-2b-technical-assistant -f Modelfile
ollama run gemma2-2b-technical-assistant
```
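If you need to recreate the Modelfile rather than download it, a minimal sketch might look like the following. The GGUF filename below is a placeholder assumption; check the repo's file list for the actual name:

```
# Hypothetical Modelfile sketch -- the GGUF filename is assumed
FROM ./gemma2-2b-technical-assistant.gguf

TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
"""
PARAMETER stop <end_of_turn>
```

The `TEMPLATE` mirrors the Gemma chat format used in the Transformers example, and the `stop` parameter keeps generation from running past the model turn.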

## Intended Use

This model is designed as a personalized technical assistant with:

- Security-first approach
- Read-only database interactions
- Direct, actionable responses
- AWS and Kubernetes expertise