Instructions for using nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor",
    filename="SmolLM2-360M.Q4_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
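The generic completion above only demonstrates loading; this model is an instruction-tuned extractor, so a more representative call uses its training template (documented under Deployment & Usage below). A minimal sketch reusing the `llm` object above, with greedy decoding; the input text is illustrative:

```python
# Sketch: extraction call using the instruction template this model was
# trained on (see "Deployment & Usage" below). Input text is illustrative.
prompt = (
    "### Instruction:\nExtract student info as JSON from the following text.\n\n"
    "### Input:\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3\n\n"
    "### Response:\n"
)
result = llm(prompt, max_tokens=128, temperature=0.0)  # temperature 0 to prevent drift
print(result["choices"][0]["text"])  # expected: a single JSON object
```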
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
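However you install it, `llama-server` exposes an OpenAI-compatible HTTP API. A minimal sketch querying it from Python, assuming the server is running on its default port 8080 and using the training prompt template described under Deployment & Usage below:

```python
# Sketch: query the llama-server OpenAI-compatible completions endpoint
# (default port 8080). The prompt mirrors the model's training template.
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={
        "prompt": (
            "### Instruction:\nExtract student info as JSON from the following text.\n\n"
            "### Input:\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3\n\n"
            "### Response:\n"
        ),
        "temperature": 0,
        "max_tokens": 128,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```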
- LM Studio
- Jan
- Ollama
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Ollama:
```sh
ollama run hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
- Unsloth Studio
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor to start chatting.
- Docker Model Runner
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Docker Model Runner:
```sh
docker model run hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
- Lemonade
How to use nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.SmolLM2-360M-Assignment-Metadata-Extractor-Q4_K_M
```
List all available models
```sh
lemonade list
```
# SmolLM2-360M-Assignment-Metadata-Extractor (GGUF)
This is a highly specialized, lightweight (360M parameter) model fine-tuned specifically to extract student metadata from chaotic, noisy assignment text and output it as strictly formatted JSON.
It was fine-tuned and converted to 4-bit GGUF format using Unsloth for maximum CPU/GPU efficiency and rapid deployment via Ollama or llama.cpp.
GitHub repo: https://github.com/nmdra/Assignment-Metadata-Extractor
## Model Capabilities
Unlike generic LLMs, this model has been purposefully overfit on a highly mutated dataset to act as a zero-shot data extractor (see the example after this list). It excels at:
- Noise Filtering: Completely ignoring conversational filler, apologies, word counts, formatting artifacts, and academic instructions.
- Handling Chaos: Robust against typos (e.g., "Stuednt No"), varied capitalization, and unpredictable line breaks.
- Strict JSON Output: Trained to output ONLY a valid JSON object with zero conversational preamble (no "Here is the JSON...").
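For example, given a noisy submission like the one below, the model should emit only the JSON object; the input is adapted from the Ollama example later in this card, and the exact value normalization shown is illustrative:

```text
Input:
Course: CS101
Stuednt No=20210088
Full Nme: Nimal Silva
HW No.-03
Please grade fairly!

Expected output:
{"student_number":"20210088","student_name":"Nimal Silva","assignment_number":"03"}
```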
### Expected Output Schema
The model will exclusively output data in the following JSON structure:
```json
{
  "student_number": "...",
  "student_name": "...",
  "assignment_number": "..."
}
```
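If you parse the raw model output yourself (rather than using constrained generation as in Method 2 below), a minimal validation sketch against this schema, assuming pydantic:

```python
# Sketch: validate raw model output against the expected schema.
import json
from typing import Optional

from pydantic import BaseModel, ValidationError

class StudentExtraction(BaseModel):
    student_number: str
    student_name: str
    assignment_number: str

def parse_extraction(raw: str) -> Optional[StudentExtraction]:
    """Parse and validate the model's raw text output; None if malformed."""
    try:
        return StudentExtraction(**json.loads(raw.strip()))
    except (json.JSONDecodeError, ValidationError, TypeError):
        return None
```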
## Deployment & Usage
Because this model was trained with a specific instruction template, it performs best when wrapped in an environment that enforces temperature 0 and matches the training prompt.
### Method 1: Using Ollama (recommended for standard usage)
Create a Modelfile with the following configuration to enforce the correct prompt template and prevent creativity:
```
FROM hf.co/nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor:Q4_K_M

TEMPLATE """### Instruction:
Extract student info as JSON from the following text.
### Input:
{{ .Prompt }}
### Response:
"""

SYSTEM """
You are a precise student assignment data extractor.
Output ONLY a valid JSON object. No explanation. No extra text. No markdown.
Always output exactly: {"student_number":"...","student_name":"...","assignment_number":"..."}
"""

PARAMETER temperature 0
PARAMETER stop "}"
```
Build and Run:
```sh
ollama create json-extractor -f Modelfile
ollama run json-extractor "Course: CS101 \n Stuednt No=20210088 \n Full Nme: Nimal Silva \n HW No.-03 \n Please grade fairly!"
```
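For programmatic use, the same model can be called through Ollama's local REST API. A minimal sketch, assuming Ollama is running on its default port 11434 and that the `json-extractor` model was created as above:

```python
# Sketch: call the json-extractor model via Ollama's REST API (default port 11434).
import json

import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "json-extractor",
        "prompt": "Stuednt No=20210088 \n Full Nme: Nimal Silva \n HW No.-03",
        "stream": False,
    },
    timeout=60,
)
raw = resp.json()["response"].strip()
# The Modelfile stops generation on "}", and Ollama excludes the stop
# sequence from the response, so re-append the closing brace if missing.
if not raw.endswith("}"):
    raw += "}"
print(json.loads(raw))
```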
### Method 2: Python using Outlines (for bulletproof JSON validation)
For production environments where a `json.JSONDecodeError` is entirely unacceptable, use this model with `outlines` and `llama-cpp-python` to structurally constrain the output tokens.
```python
import outlines
from pydantic import BaseModel

class StudentExtraction(BaseModel):
    student_number: str
    student_name: str
    assignment_number: str

# Load the GGUF model from the Hugging Face Hub (outlines 0.x API)
model = outlines.models.llamacpp(
    "nimendraai/SmolLM2-360M-Assignment-Metadata-Extractor",
    "SmolLM2-360M.Q4_K_M.gguf",
    n_gpu_layers=0,  # run on CPU; set to -1 to offload all layers to the GPU
)

# Constrain the generator to the Pydantic schema
generator = outlines.generate.json(model, StudentExtraction)

# Format the prompt exactly as trained
prompt = (
    "### Instruction:\nExtract student info as JSON from the following text.\n\n"
    "### Input:\nStu. ID: 20210088 | Full Name: Nimal Silva | HW-3\n\n"
    "### Response:\n"
)

result = generator(prompt)  # returns a validated StudentExtraction instance
print(result.model_dump_json())
```
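Because Outlines enforces the schema at the token level during decoding, malformed output becomes structurally impossible rather than merely unlikely: even if the model drifts, the generated text is guaranteed to parse into `StudentExtraction`.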
## Training Details
- Base Model: HuggingFaceTB/SmolLM2-360M
- Dataset: 1,250 highly varied synthetic examples containing realistic human errors, markdown noise, and distractor text.
- Epochs: 5 (optimized to reach a training loss below ~0.40 to prevent hallucinations).
- Framework: trained efficiently using LoRA (Low-Rank Adaptation) via Unsloth.
- Quantization: exported to Q4_K_M GGUF format, reducing the memory footprint to ~270 MB.
This model was trained 2x faster with Unsloth.
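For reference, a minimal sketch of the fine-tuning and export flow described above; only the base model, epoch count, dataset size, and quantization method come from this card, and every other hyperparameter is an illustrative assumption:

```python
# Sketch of the LoRA fine-tuning + GGUF export flow with Unsloth.
# Only the base model, epochs, and quantization method come from the
# training details above; all other hyperparameters are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceTB/SmolLM2-360M",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumption: LoRA rank
    lora_alpha=16,   # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# ... train for 5 epochs on the 1,250-example dataset (e.g., with TRL's SFTTrainer) ...

# Export to Q4_K_M GGUF for llama.cpp / Ollama
model.save_pretrained_gguf("gguf_out", tokenizer, quantization_method="q4_k_m")
```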