Add model card
README.md
ADDED
# gemma2b-nirf-lookup-gguf

This is a GGUF conversion of [coderop12/gemma2b-nirf-lookup-2025](https://huggingface.co/coderop12/gemma2b-nirf-lookup-2025).

## Model Details

- **Original Model**: [coderop12/gemma2b-nirf-lookup-2025](https://huggingface.co/coderop12/gemma2b-nirf-lookup-2025)
- **Format**: GGUF (F16 precision)
- **File Size**: ~4.9 GB
- **Architecture**: Gemma 2B
- **Specialization**: NIRF (National Institutional Ranking Framework) lookup and ranking queries
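
A minimal sketch for fetching the file programmatically with `huggingface_hub`; the repo id below is a placeholder for this repository's actual id on the Hub, while the filename matches the one used in the examples that follow:

```python
from huggingface_hub import hf_hub_download

# Download the GGUF file and return its local path.
# NOTE: "your-username/gemma2b-nirf-lookup-gguf" is a placeholder repo id;
# replace it with this repository's actual id.
model_path = hf_hub_download(
    repo_id="your-username/gemma2b-nirf-lookup-gguf",
    filename="gemma2b-nirf-lookup-gguf.gguf",
)
print(model_path)
```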
## Usage

### With llama.cpp

```bash
./llama-cli -m gemma2b-nirf-lookup-gguf.gguf -p "What is the NIRF ranking methodology?"
```
### With Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the GGUF model; n_ctx matches the model's 2048-token context window
llm = Llama(model_path="gemma2b-nirf-lookup-gguf.gguf", n_ctx=2048)

response = llm("What are the top NIRF ranked engineering colleges?")
print(response["choices"][0]["text"])
```
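
llama-cpp-python also exposes a chat-style API. A minimal sketch, assuming the chat template was preserved in the GGUF metadata during conversion (llama-cpp-python falls back to a generic template otherwise):

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma2b-nirf-lookup-gguf.gguf", n_ctx=2048)

# create_chat_completion applies the model's chat template to the messages
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain the NIRF ranking parameters."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```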
### With Ollama

```bash
# First, create a Modelfile that points at the local GGUF file
echo 'FROM ./gemma2b-nirf-lookup-gguf.gguf' > Modelfile
ollama create gemma2b-nirf-lookup-gguf -f Modelfile
ollama run gemma2b-nirf-lookup-gguf "Explain NIRF ranking parameters"
```
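
Once registered, the model can also be queried from Python. A minimal sketch, assuming the `ollama` Python client is installed (`pip install ollama`) and the Ollama server is running locally:

```python
import ollama

# Send a generation request to the locally registered model
response = ollama.generate(
    model="gemma2b-nirf-lookup-gguf",
    prompt="Which NIRF ranking parameters carry the highest weight?",
)
print(response["response"])
```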
## Model Capabilities

This model is specifically fine-tuned for:

- NIRF ranking information and queries
- Indian higher education institutional data
- University and college ranking explanations
- Educational policy and framework questions
## Technical Details

- **Precision**: F16 (16-bit floating point; no further quantization applied)
- **Context Length**: 2048 tokens
- **License**: Follows the original model's license terms
- **Converted using**: llama.cpp conversion tools
## Original Model License

Please refer to the original model repository, [coderop12/gemma2b-nirf-lookup-2025](https://huggingface.co/coderop12/gemma2b-nirf-lookup-2025), for license information.