---
base_model: google/gemma-2b-it
language: en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- precision-grounding
- document-qa
- zero-hallucination
- legal-tech
- technical-analysis
---
# Solvrays LLM - High-Precision Document Analyst

## Overview
This model is a specialized fine-tune of **google/gemma-2b-it**, engineered for **zero-hallucination document retrieval**. It is optimized to handle complex, domain-specific documents (technical, legal, or architectural) with strict adherence to the provided context.
### Primary Design Objectives
- **Factual Integrity**: Trained to answer "Not Documented" rather than speculate.
- **Contextual Continuity**: Overlap-aware training prevents information loss across page boundaries; the chunking sketch below illustrates the strategy.
- **Domain Versatility**: Seamlessly switches between technical and non-technical document styles.
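The overlap-aware strategy mirrors how the training corpus was chunked: fixed-size token windows that share a margin, so a fact straddling a page boundary appears intact in at least one chunk. The following is a minimal sketch of such a chunker, not the actual training pipeline; the 512/128 figures come from the specifications table below, and the tokenizer is the one from the usage example.

```python
from transformers import AutoTokenizer

def chunk_with_overlap(text, tokenizer, block_size=512, overlap=128):
    """Split text into overlapping token windows so that content near a
    chunk boundary is fully contained in at least one chunk."""
    ids = tokenizer(text, add_special_tokens=False)['input_ids']
    stride = block_size - overlap  # each window starts 384 tokens after the last
    chunks = []
    for start in range(0, max(len(ids) - overlap, 1), stride):
        window = ids[start:start + block_size]
        chunks.append(tokenizer.decode(window))
    return chunks

# Example:
# tokenizer = AutoTokenizer.from_pretrained('solvrays/solvrays-llm')
# chunks = chunk_with_overlap(long_document, tokenizer)
```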
## Professional Usage (Grounded Inference)

To reproduce the precision the model was trained for, use the following grounded-inference pattern:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = 'solvrays/solvrays-llm'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map='auto', torch_dtype=torch.bfloat16
)

# Universal grounding template: instructs the model to answer only from
# documented knowledge and to say so when the answer is not documented.
instruction = ('Analyze your internal knowledge base and provide a precise, '
               'factual response based strictly on the documentation you have '
               'been trained on. If the information is not documented, state '
               'that it is not documented.')
query = 'What are the main infrastructure requirements?'

prompt = (f'### Instruction: {instruction}\n'
          f'### Knowledge Context: {query}\n'
          f'### Verified Response:')

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
with torch.no_grad():
    # Greedy decoding (do_sample=False) keeps output deterministic; the
    # repetition penalty discourages looping on short grounded answers.
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False,
                             repetition_penalty=1.5)

# Keep only the text generated after the response marker.
print(tokenizer.decode(outputs[0], skip_special_tokens=True)
      .split('### Verified Response:')[-1].strip())
```
## Technical Specifications
| Parameter | Configuration |
| :--- | :--- |
| Base Model | google/gemma-2b-it |
| Fine-tuning Method | QLoRA (4-bit quantization) |
| LoRA Rank (r) | 16 |
| LoRA Alpha | 32 |
| Training Epochs | 5 |
| Context Strategy | 512 tokens with 128-token overlap |
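Since the fine-tune used 4-bit QLoRA, the model can also be loaded in 4-bit for inference on smaller GPUs. This optional sketch uses `BitsAndBytesConfig` from transformers and assumes the `bitsandbytes` package is installed; the bfloat16 loading shown above works equally well.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 quantization mirrors the QLoRA training configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    'solvrays/solvrays-llm',
    device_map='auto',
    quantization_config=bnb_config,
)
```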
## Risks & Limitations
- **Context Window**: Strictly limited to the fine-tuned block size (512 tokens). For longer, multi-page queries, retrieval-augmented generation (RAG) is recommended; a minimal retrieval sketch follows this list.
- **Bias**: The model reflects the biases of the provided training documentation.
- **Accuracy**: Always verify critical technical numbers against the original source.
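A minimal RAG sketch under stated assumptions: it reuses the `chunk_with_overlap` helper from above and ranks chunks by simple word overlap with the query. A production setup would use an embedding model and a vector store; the `retrieve` helper here is purely illustrative.

```python
def retrieve(query, chunks, top_k=1):
    """Illustrative ranking: score each chunk by how many query words
    it contains, then return the best-matching chunk(s)."""
    q_words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

# chunks = chunk_with_overlap(long_document, tokenizer)  # from the sketch above
# context = '\n'.join(retrieve(query, chunks))
# prompt = (f'### Instruction: {instruction}\n'
#           f'### Knowledge Context: {context}\n'
#           f'### Verified Response:')
```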
---
**Architected and Fine-tuned by Bibek Lama Singtan**