---
base_model: google/gemma-3-1b-it
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/gemma-3-1b-it
- lora
- transformers
license: cc-by-4.0
datasets:
- Shlok307/Interview_questions
language:
- en
---

# Gemma 3 Interview LoRA — 1B Instruct

This model is a **QLoRA fine-tuned version** of **Gemma-3-1B-IT**, trained on a curated dataset of **5,002 interview-style Q&A samples** across:

- **Artificial Intelligence (AI)**  
- **General Programming**  
- **Web Development**  

The goal is to enhance Gemma-3 into a **technical interview assistant**, capable of:

- Generating domain-specific interview questions  
- Providing accurate, structured, exam-style answers  
- Explaining concepts clearly and concisely  
- Maintaining a professional and consistent interview tone  

---

## Dataset

The model was fine-tuned on a dataset of 5,002 samples with the following fields:

| Field | Description |
|-------|-------------|
| **domain** | AI, General Programming, Web Development |
| **question** | Interview question from that domain |
| **answer** | Ground-truth, explanation-style answer |

Each training row was converted into:

- Instruction:  
  `"Answer this <domain> interview question: <question>"`
- Response:  
  `"<answer>"`
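
The conversion described above can be sketched as a small formatting function. The field names come from the dataset table; the exact prompt template is assumed from the description here, so treat this as illustrative rather than the training script itself:

```python
def format_row(row: dict) -> dict:
    """Convert one dataset row into an instruction/response pair.

    `domain`, `question`, and `answer` are the dataset fields; the
    instruction template mirrors the one described in this card.
    """
    return {
        "instruction": f"Answer this {row['domain']} interview question: {row['question']}",
        "response": row["answer"],
    }

# Hypothetical example row for illustration
row = {
    "domain": "AI",
    "question": "What is overfitting?",
    "answer": "Overfitting occurs when a model memorises the training data instead of generalising.",
}
pair = format_row(row)
print(pair["instruction"])
# Answer this AI interview question: What is overfitting?
```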

---

## Usage Example

### Python

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Shlok307/ai_interview-lora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading an adapter repository directly with AutoModelForCausalLM
# requires `peft` to be installed; the base model is resolved from
# the adapter config.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16
)

prompt = [
    {"role": "user", "content": "Answer this AI interview question: What is backpropagation?"}
]

input_ids = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7
)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
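
If you prefer to keep the adapter separate from the base model (for example, to toggle it on and off), it can also be attached explicitly with `peft`. A sketch, assuming the repository is a standard LoRA adapter for `google/gemma-3-1b-it`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_id = "google/gemma-3-1b-it"
adapter_id = "Shlok307/ai_interview-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",
    torch_dtype=torch.float16
)

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```

Merging removes the adapter indirection entirely, at the cost of no longer being able to swap it out at runtime.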

## Citation
```
@misc{gemma3_interview_lora,
  title={Gemma 3 Interview LoRA — 1B IT},
  author={Shlok Talhar},
  year={2025},
  url={https://huggingface.co/Shlok307/gemma3-interview-lora}
}
```