Instructions for using nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with libraries, local apps, and inference tooling.
- Libraries
- llama-cpp-python
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor",
    filename="SmolLM2-360M.Q4_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
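Since this is a task-specific extractor rather than a chat model, the generic prompt above is only a smoke test. A minimal sketch of calling it with the instruction template it was trained on (documented under Deployment & Usage below); the input line is illustrative:

```python
# Wrap the noisy text in the training-time instruction template
prompt = (
    "### Instruction:\nExtract student info as JSON from the following text.\n\n"
    "### Input:\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3\n\n"
    "### Response:\n"
)
# Temperature 0 keeps the extraction deterministic, as the model card recommends
output = llm(prompt, max_tokens=128, temperature=0)
print(output["choices"][0]["text"])
```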
- Local Apps
- llama.cpp
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with llama.cpp:
**Install with Homebrew**

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
**Install with WinGet (Windows)**

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
**Use a pre-built binary**

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
**Build from source**

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
**Use Docker**

```bash
docker model run hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
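Once `llama-server` is running (via any of the install paths above), it exposes an OpenAI-compatible API, by default on port 8080. A minimal request sketch, assuming that default port; the input text is illustrative:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "temperature": 0,
    "messages": [
      {"role": "user", "content": "Extract student info as JSON from the following text.\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3"}
    ]
  }'
```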
- Ollama
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Ollama:
```bash
ollama run hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
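The pulled model can also be queried over Ollama's local REST API (default port 11434); a minimal sketch with curl, wrapping an illustrative input in the training-time instruction template described in the model card below:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M",
  "prompt": "### Instruction:\nExtract student info as JSON from the following text.\n\n### Input:\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3\n\n### Response:\n",
  "stream": false,
  "options": {"temperature": 0}
}'
```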
- Unsloth Studio
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Unsloth Studio:
**Install Unsloth Studio (macOS, Linux, WSL)**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor to start chatting
```
**Install Unsloth Studio (Windows)**

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor to start chatting
```
**Use Hugging Face Spaces for Unsloth**

No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor to start chatting.
- Docker Model Runner
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Docker Model Runner:
```bash
docker model run hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
- Lemonade
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Lemonade:
**Pull the model**

```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
**Run and chat with the model**

```bash
lemonade run user.SmolLM2-360M-Assignment-Metadata-Extractor-Q4_K_M
```
**List all available models**

```bash
lemonade list
```
---
tags:
- gguf
- llama.cpp
- unsloth
- smollm2
- json-extraction
- data-extraction
language:
- en
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-360M
---
# SmolLM2-360M-Assignment-Metadata-Extractor (GGUF)
This is a highly specialized, lightweight (360M parameter) model fine-tuned specifically to extract student metadata from chaotic, noisy assignment text and output it as strictly formatted JSON.
It was fine-tuned and converted to 4-bit GGUF format using [Unsloth](https://github.com/unslothai/unsloth) for efficient CPU/GPU inference and rapid deployment via Ollama or `llama.cpp`.
**GitHub Repo:** https://github.com/nmdra/Assignment-Metadata-Extractor
## Model Capabilities
Unlike generic LLMs, this model has been purposefully overfit on a highly mutated dataset to act as a **Zero-Shot Data Extractor**. It excels at:
- **Noise Filtering:** Completely ignoring conversational filler, apologies, word counts, formatting artifacts, and academic instructions.
- **Handling Chaos:** Robust against typos (e.g., "Stuednt No"), varied capitalization, and unpredictable line breaks.
- **Strict JSON Output:** Trained to output ONLY a valid JSON object with zero conversational preamble (no "Here is the JSON...").
### Expected Output Schema
The model will exclusively output data in the following JSON structure:
```json
{
  "student_number": "...",
  "student_name": "...",
  "assignment_number": "..."
}
```
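For instance, a noisy input such as `Course: CS101 \n Stuednt No=20210088 \n Full Nme: Nimal Silva \n HW No.-03 \n Please grade fairly!` should yield only the three fields; the values shown here are illustrative of a successful extraction, not a captured model response:

```json
{
  "student_number": "20210088",
  "student_name": "Nimal Silva",
  "assignment_number": "03"
}
```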
-----
## Deployment & Usage
Because this model was trained with a specific instruction template, it performs best when wrapped in an environment that enforces **Temperature 0** and matches the training prompt.
### Method 1: Using Ollama (Recommended for standard usage)
Create a `Modelfile` with the following configuration to enforce the correct prompt template and prevent creativity:
```text
FROM hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
TEMPLATE """### Instruction:
Extract student info as JSON from the following text.
### Input:
{{ .Prompt }}
### Response:
"""
SYSTEM """
You are a precise student assignment data extractor.
Output ONLY a valid JSON object. No explanation. No extra text. No markdown.
Always output exactly: {"student_number":"...","student_name":"...","assignment_number":"..."}
"""
PARAMETER temperature 0
PARAMETER stop "}"
```
**Build and Run:**
```bash
ollama create json-extractor -f Modelfile
ollama run json-extractor "Course: CS101 \n Stuednt No=20210088 \n Full Nme: Nimal Silva \n HW No.-03 \n Please grade fairly!"
```
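From application code, the `json-extractor` model built above can be called through Ollama's local REST API. A minimal sketch in Python, assuming Ollama on its default port; note that because the Modelfile sets `stop "}"`, the closing brace is excluded from the response and may need to be restored before parsing:

```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "json-extractor",
        "prompt": "Stu. ID: 20210088 | Full Name: Nimal Silva | HW-3",
        "stream": False,
    },
    timeout=60,
)
raw = resp.json()["response"].strip()
# The stop parameter cuts generation at "}", so re-append it if missing
if not raw.endswith("}"):
    raw += "}"
print(json.loads(raw))
```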
### Method 2: Python using Outlines (For bulletproof JSON validation)
For production environments where `json.JSONDecodeError` is entirely unacceptable, use this model with `outlines` and `llama-cpp-python` to structurally constrain the output tokens.
```python
import outlines
from pydantic import BaseModel

class StudentExtraction(BaseModel):
    student_number: str
    student_name: str
    assignment_number: str

# Load the GGUF model from the Hugging Face Hub
# (outlines wraps llama-cpp-python; n_gpu_layers=0 keeps inference on CPU,
#  set n_gpu_layers=-1 to offload all layers to a GPU)
model = outlines.models.llamacpp(
    "nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor",
    "SmolLM2-360M.Q4_K_M.gguf",
    n_gpu_layers=0,
)

# Constrain the generator to the Pydantic schema
generator = outlines.generate.json(model, StudentExtraction)

# Format the prompt exactly as trained
prompt = (
    "### Instruction:\nExtract student info as JSON from the following text.\n\n"
    "### Input:\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3\n\n"
    "### Response:\n"
)

result = generator(prompt)  # returns a validated StudentExtraction instance
print(result.model_dump_json())
```
-----
## Training Details
- **Base Model:** `HuggingFaceTB/SmolLM2-360M`
- **Dataset:** 1,250 highly varied synthetic examples containing realistic human errors, markdown noise, and distractor text.
- **Epochs:** 5 (optimized to reach a loss below ~0.40 to prevent hallucinations).
- **Framework:** Trained efficiently using LoRA (Low-Rank Adaptation) via Unsloth; a sketch of this setup follows below.
- **Quantization:** Exported to `Q4_K_M` GGUF format to reduce the memory footprint to ~270 MB.
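A minimal sketch of what this kind of Unsloth LoRA setup typically looks like; the sequence length, rank, and target modules below are illustrative assumptions, not the exact training configuration:

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit to keep memory use low during training
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceTB/SmolLM2-360M",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small low-rank matrices are trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # illustrative rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```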
---
This was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)