# Solvrays Gemma 2B Fine-tuned PDF

## Model Summary
This model is a fine-tuned version of google/gemma-2b, optimized for long-context understanding of technical documents processed via a Vision-Aided Document Engine. It builds on the same research and technology used in Google's Gemini models, packaged in a lightweight 2B-parameter architecture suitable for edge deployment.
## Description

Gemma is a family of lightweight, state-of-the-art open models built by Google. This variant has been adapted with QLoRA (low-rank adaptation on a 4-bit-quantized base) to internalize specialized knowledge from a custom PDF corpus, with layout nuances preserved through a hybrid Vision-OCR pipeline.
## Context Length

The model retains its native context length of 8,192 tokens, making it well suited to long-form document summarization and complex reasoning tasks.
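Inputs longer than this window must be truncated or chunked before generation. As a minimal sketch (the file name is hypothetical), truncation can be enforced at tokenization time:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("singtan/solvrays-gemma-2b-finetuned-pdf")

# Clamp any document to the model's 8,192-token context window.
with open("report.txt") as f:  # hypothetical extracted-document file
    ids = tokenizer(f.read(), truncation=True, max_length=8192)["input_ids"]
print(len(ids))  # never exceeds 8192
```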
## Usage

### Running the model on a GPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "singtan/solvrays-gemma-2b-finetuned-pdf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in 4-bit to keep GPU memory low; float16 compute keeps generation fast.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
)

input_text = "Summarize the key findings of the provided documentation."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
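Note that 4-bit loading requires the `bitsandbytes` package, and `device_map="auto"` requires `accelerate`; both can be installed with `pip install bitsandbytes accelerate`.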
## Training Details

### Training Dataset
The model was fine-tuned on a diverse set of documents sourced from /content/. The data was processed using a Hybrid Vision-Aided Engine:

- Digital Extraction: native text-stream recovery for high-fidelity digital PDFs.
- Vision Fallback: Tesseract-based OCR for scanned or image-heavy documents.
- Chunking: a sliding-window strategy with 512-token windows and a 64-token overlap (a sketch of this pipeline follows the list).
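The extraction code itself is not published with this card. As a hedged illustration of the pipeline described above, the sketch below assumes `pypdf` for native text recovery and `pdf2image` plus `pytesseract` for the Tesseract fallback; these library choices and the 100-character heuristic are assumptions, not details from the card.

```python
from pypdf import PdfReader

def extract_text(pdf_path: str) -> str:
    """Try native text extraction first; fall back to OCR for scanned pages."""
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    if len(text.strip()) < 100:  # heuristic: near-empty text stream -> scanned PDF
        from pdf2image import convert_from_path
        import pytesseract
        images = convert_from_path(pdf_path)
        text = "\n".join(pytesseract.image_to_string(img) for img in images)
    return text

def chunk_tokens(token_ids: list[int], window: int = 512, overlap: int = 64) -> list[list[int]]:
    """Sliding-window chunking: 512-token windows advancing by 448 tokens."""
    step = window - overlap
    return [token_ids[i : i + window] for i in range(0, max(len(token_ids) - overlap, 1), step)]
```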
### Training Configuration

| Parameter | Value |
|---|---|
| Epochs | 3 |
| Batch Size | 1 |
| Learning Rate | 1e-4 |
| Optimizer | AdamW (8-bit) |
| Hardware | NVIDIA GPU (`cuda`) |
| Quantization | 4-bit (bitsandbytes) |
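For reference, here is a minimal sketch of how these hyperparameters map onto a QLoRA setup with `peft` and `transformers`; the LoRA rank, alpha, and target modules are illustrative assumptions, not values stated in this card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit, matching the Quantization row above.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter config: rank, alpha, and target modules are assumptions.
model = get_peft_model(model, LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

# Hyperparameters from the table above.
args = TrainingArguments(
    output_dir="gemma-2b-pdf-qlora",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    optim="adamw_bnb_8bit",  # 8-bit AdamW via bitsandbytes
)
```

The configured model and arguments would then be handed to a `Trainer` (or `trl`'s `SFTTrainer`) together with the chunked corpus.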
### Performance Metrics

- Final Training Loss: N/A
- Total Runtime: N/A
## Ethics and Safety

This model inherits the safety principles of the Gemma family. Evaluations of the base model were conducted across categories including child safety, content safety, and representational harms. Users are encouraged to implement their own application-specific safeguards, as outlined in Google's Responsible Generative AI Toolkit.
## Limitations

- Factual Accuracy: Like all LLMs, the model may hallucinate when prompted outside its fine-tuned domain.
- Language: Optimized primarily for English-language documents.
- Common Sense: The model relies on statistical patterns and may fail at commonsense reasoning in edge cases.
## Authors

Fine-tuned by Bibek | Base model by Google.