How to use from llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf simonguest/Qwen3-1.7B-code-explainer:F16
# Run inference directly in the terminal:
llama-cli -hf simonguest/Qwen3-1.7B-code-explainer:F16
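
Once llama-server is running, it exposes an OpenAI-compatible HTTP API. Below is a minimal request sketch; the default port (8080) and the Python requests package are assumptions, not part of this card:

import requests

# Send a code snippet as the user prompt, per this model's intended use.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "def add(a, b):\n    return a + b"},
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])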
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf simonguest/Qwen3-1.7B-code-explainer:F16
# Run inference directly in the terminal:
llama-cli -hf simonguest/Qwen3-1.7B-code-explainer:F16
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf simonguest/Qwen3-1.7B-code-explainer:F16
# Run inference directly in the terminal:
./llama-cli -hf simonguest/Qwen3-1.7B-code-explainer:F16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf simonguest/Qwen3-1.7B-code-explainer:F16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf simonguest/Qwen3-1.7B-code-explainer:F16
Use Docker
docker model run hf.co/simonguest/Qwen3-1.7B-code-explainer:F16
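
Note: docker model run requires Docker Model Runner to be enabled in your Docker installation. Run as shown above for an interactive chat session in the terminal, or append a quoted prompt to the command for a single one-off completion.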
Qwen3-1.7B-code-explainer

Model Description

Fine-tuned from Qwen/Qwen3-1.7B using QLoRA (4-bit) with supervised fine-tuning.

Training Details

  • Dataset: simonguest/test-dataset
  • LoRA rank: 16, alpha: 32
  • Epochs: 3, Learning rate: 0.0002
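
For reference, a minimal sketch of what this configuration might look like with Hugging Face PEFT. The training stack and 4-bit loading details are assumptions; only the rank, alpha, epochs, and learning rate above come from this card:

from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit base-model loading for QLoRA (stack assumed, not stated on this card).
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

lora_config = LoraConfig(
    r=16,            # LoRA rank (from this card)
    lora_alpha=32,   # LoRA alpha (from this card)
    task_type="CAUSAL_LM",
)
# Supervised fine-tuning would then run for 3 epochs at a learning
# rate of 2e-4, matching the values listed above.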

Intended Use

This model is a test model used for the CS-394/594 class at DigiPen.

The model is designed to provide a summary explanation of a snippet of Python code, intended for use in an IDE. It takes a snippet of code (passed as the user prompt) and returns a two-paragraph explanation of what the code does, including an analogy that helps students better understand how the code functions.
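
A sketch of that flow against a local llama-server instance; the default port (8080) and the openai Python package are assumptions, not part of this card:

from openai import OpenAI

# llama-server ignores the API key and serves whichever model it loaded.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

snippet = "squares = [n * n for n in range(10)]"

response = client.chat.completions.create(
    model="Qwen3-1.7B-code-explainer",
    messages=[{"role": "user", "content": snippet}],
)
# Expected output: a two-paragraph explanation including an analogy.
print(response.choices[0].message.content)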

Limitations

This model is a single-turn model and has not been trained to support long, multi-turn conversations.
