---
license: apache-2.0
base_model: coderop12/gemma2b-nirf-lookup-2025
tags:
- gguf
- quantized
- gemma
- nirf
- education
- ranking
- indian-universities
- text-generation
library_name: gguf
model_name: gemma2b-nirf-lookup-gguf
inference: false
model_creator: coderop12
model_type: gemma
quantization: f16
language:
- en
pipeline_tag: text-generation
widget:
- text: "What is NIRF ranking methodology?"
  example_title: "NIRF Methodology"
- text: "Which are the top engineering colleges in NIRF 2024?"
  example_title: "Top Engineering Colleges"
- text: "How are universities ranked in India?"
  example_title: "University Rankings"
---

# gemma2b-nirf-lookup-gguf

This is a GGUF conversion of [coderop12/gemma2b-nirf-lookup-2025](https://huggingface.co/coderop12/gemma2b-nirf-lookup-2025).

## Model Details
- **Original Model**: coderop12/gemma2b-nirf-lookup-2025
- **Format**: GGUF (F16 precision)
- **File Size**: ~4.9 GB
- **Architecture**: Gemma 2B
- **Specialization**: NIRF (National Institutional Ranking Framework) lookup and ranking queries

## Usage

### With llama.cpp

```bash
./llama-cli -m gemma2b-nirf-lookup-gguf.gguf -p "What is the NIRF ranking methodology?"
```

### With Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the GGUF model; n_ctx matches the model's 2048-token context window
llm = Llama(model_path="gemma2b-nirf-lookup-gguf.gguf", n_ctx=2048)

# max_tokens raises llama-cpp-python's small default completion length
response = llm("What are the top NIRF ranked engineering colleges?", max_tokens=256)
print(response['choices'][0]['text'])
```
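If the underlying fine-tune follows Gemma's instruction-tuned chat template (an assumption — verify against the original model repository), raw prompts should be wrapped in Gemma's turn markers before being passed to the completion call. A minimal sketch of such a helper:

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat turn markers.

    Assumes the fine-tune uses the standard Gemma instruction
    template; check the original model repository to confirm.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("What is NIRF ranking methodology?")
```

The resulting `prompt` string can then be passed in place of the plain question in the llama-cpp-python example above.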
|
|
|
|
|
### With Ollama

```bash
# First, create a Modelfile
echo 'FROM ./gemma2b-nirf-lookup-gguf.gguf' > Modelfile

ollama create gemma2b-nirf-lookup-gguf -f Modelfile
ollama run gemma2b-nirf-lookup-gguf "Explain NIRF ranking parameters"
```

## Model Capabilities

This model is specifically fine-tuned for:

- NIRF ranking information and queries
- Indian higher education institutional data
- University and college ranking explanations
- Educational policy and framework questions

## Technical Details

- **Quantization**: F16 (16-bit floating point)
- **Context Length**: 2048 tokens
- **License**: Follows the original model's license terms
- **Converted using**: llama.cpp conversion tools
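For reference, an F16 GGUF export like this one is typically produced with llama.cpp's Hugging Face conversion script. A sketch of the usual workflow (the script name and flags vary between llama.cpp versions — older releases ship `convert.py` or `convert-hf-to-gguf.py` instead):

```shell
# Fetch llama.cpp and its Python conversion dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the original checkpoint locally
huggingface-cli download coderop12/gemma2b-nirf-lookup-2025 \
  --local-dir gemma2b-nirf-lookup-2025

# Convert the Hugging Face checkpoint to an F16 GGUF file
python llama.cpp/convert_hf_to_gguf.py gemma2b-nirf-lookup-2025 \
  --outtype f16 --outfile gemma2b-nirf-lookup-gguf.gguf
```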
|
|
|
|
|
## Original Model License

Please refer to the original model repository for license information.