---
base_model: unsloth/gemma-3-270m-it-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/gemma-3-270m-it-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
- vcet
- domain-specific
license: apache-2.0
metrics:
- accuracy
---

# Model Card for gemma-270-it-vcet-lora

<!-- Provide a quick summary of what the model is/does. -->

A LoRA adapter for unsloth/gemma-3-270m-it-unsloth-bnb-4bit that answers questions about VCET College, Madurai.

## Model Details

### Model Description

This model is a domain-specific conversational AI fine-tuned on custom data related to VCET College, Madurai. Built on top of unsloth/gemma-3-270m-it-unsloth-bnb-4bit, it uses LoRA and PEFT for parameter-efficient adaptation. The model is designed to answer queries about campus life, academics, departments, events, and administrative processes at VCET.

- **Developed by:** SandeepCodez
- **Funded by [optional]:** Self-funded
- **Shared by [optional]:** SandeepCodez
- **Model type:** Causal Language Model (Text Generation)
- **Language(s) (NLP):** English (with contextual Tamil understanding)
- **License:** Apache 2.0
- **Finetuned from model [optional]:** unsloth/gemma-3-270m-it-unsloth-bnb-4bit

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/SandeepCodez
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

- Answering VCET-related questions
- Assisting students with academic and campus queries
- Automating college FAQs
- Supporting chatbot integration for VCET platforms

### Downstream Use [optional]

- Integration into college ERP systems
- Enhancing virtual assistants for student support
- Embedding in mobile apps or websites (a minimal serving sketch follows this list)
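
For chatbot or website integration, one minimal approach is to wrap the adapter in a `transformers` text-generation pipeline behind the platform's existing API layer. The sketch below is illustrative only: it assumes a recent `transformers` release with `peft` installed (so the adapter repository can be loaded directly), and the `ask_vcet` helper is a hypothetical name, not part of this repository.

```python
from transformers import pipeline

# Illustrative sketch; loading the adapter repo directly requires `peft` to be installed.
chatbot = pipeline("text-generation", model="SandeepCodez/gemma-270-it-vcet-lora")

def ask_vcet(question: str) -> str:
    """Hypothetical helper for an FAQ endpoint or virtual assistant."""
    messages = [{"role": "user", "content": question}]
    # Recent transformers pipelines accept chat-style message lists directly.
    result = chatbot(messages, max_new_tokens=128)
    # The pipeline returns the full conversation; the last message is the model's reply.
    return result[0]["generated_text"][-1]["content"]

print(ask_vcet("Which departments does VCET offer?"))
```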

### Out-of-Scope Use

- General-purpose text generation outside the VCET context
- Legal, medical, or financial advice
- High-stakes decision-making without human oversight

## Bias, Risks, and Limitations

- May reflect institutional bias from VCET sources
- Limited generalization outside the VCET domain
- Not suitable for sensitive or critical applications

### Recommendations

- Use in supervised environments
- Periodic dataset updates are recommended
- Human validation of factual accuracy is advised

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading the adapter repository directly requires `peft` to be installed;
# transformers fetches the base model and attaches the LoRA weights.
model_id = "SandeepCodez/gemma-270-it-vcet-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What are the placement statistics for VCET Madurai?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
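
Because the base model is instruction-tuned, prompts typically behave better when wrapped in the Gemma chat template. A minimal follow-on sketch, assuming the tokenizer ships the standard chat template and reusing `tokenizer` and `model` from above:

```python
# Chat-style prompting via the tokenizer's chat template (recommended for the -it base model).
messages = [
    {"role": "user", "content": "What are the placement statistics for VCET Madurai?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```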

## Training Details

### Training Data

Custom dataset created by the developer, including:

- VCET brochures
- Departmental documents
- Student interviews
- Campus FAQs
- Event archives

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

- Cleaned and structured into JSONL format
- Tokenized using the Gemma tokenizer
- Filtered for relevance and clarity
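
The exact schema of the training file is not published; the single record below is only a hypothetical illustration of the chat-style JSONL that TRL's SFT tooling commonly consumes (one JSON object per line).

```json
{"messages": [{"role": "user", "content": "Where is VCET located?"}, {"role": "assistant", "content": "VCET is located in Madurai, Tamil Nadu, India."}]}
```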

#### Training Hyperparameters

- Training regime: bf16 mixed precision
- Epochs: 3
- Batch size: 16
- Learning rate: 2e-4
- Frameworks: PEFT 0.17.1, TRL, Unsloth
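
A rough reconstruction of the fine-tuning setup under the hyperparameters listed above. This is a sketch, not the exact training script: the LoRA rank/alpha, target modules, and dataset path are assumptions, and the actual run used Unsloth's wrappers rather than plain transformers loading.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

# The bnb-4bit base requires bitsandbytes; the original run used Unsloth's FastLanguageModel wrappers.
base_id = "unsloth/gemma-3-270m-it-unsloth-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# LoRA settings below are assumptions; only the framework (PEFT) is documented above.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hyperparameters taken from the list above: 3 epochs, batch size 16, lr 2e-4, bf16.
args = SFTConfig(
    output_dir="gemma-270-it-vcet-lora",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-4,
    bf16=True,
)

# Hypothetical path to the chat-style JSONL described under Preprocessing.
dataset = load_dataset("json", data_files="vcet_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    processing_class=tokenizer,
)
trainer.train()
```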

#### Speeds, Sizes, Times [optional]

- Training time: ~3 hours
- Dataset size: ~10,000 samples
- Base model size: 270M parameters

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
### Framework versions

- PEFT 0.17.1