Instructions to use AlSamCur123/OSS with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - llama-cpp-python
How to use AlSamCur123/OSS with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="AlSamCur123/OSS",
    filename="gpt-oss-20b.MXFP4.gguf",
)

llm.create_chat_completion(
    messages = "No input example has been defined for this model task."
)
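Note that `create_chat_completion` expects `messages` to be a list of role/content dicts, not a plain string; the placeholder above is site boilerplate. A minimal sketch of a valid call (the prompts here are illustrative, not from the model card):

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble the chat-format messages list that create_chat_completion expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("You are a helpful assistant.", "Say hello.")

# With the model loaded as above (requires the GGUF download):
# response = llm.create_chat_completion(messages=messages)
# print(response["choices"][0]["message"]["content"])
```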
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - llama.cpp
How to use AlSamCur123/OSS with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AlSamCur123/OSS

# Run inference directly in the terminal:
llama-cli -hf AlSamCur123/OSS
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AlSamCur123/OSS

# Run inference directly in the terminal:
llama-cli -hf AlSamCur123/OSS
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf AlSamCur123/OSS

# Run inference directly in the terminal:
./llama-cli -hf AlSamCur123/OSS
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf AlSamCur123/OSS

# Run inference directly in the terminal:
./build/bin/llama-cli -hf AlSamCur123/OSS
Use Docker
docker model run hf.co/AlSamCur123/OSS
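Once `llama-server` is running, it exposes a standard OpenAI-compatible chat-completions endpoint (port 8080 by default). A hedged sketch of building such a request with only the Python standard library; the port, prompt, and model id are assumptions, not confirmed by the model card:

```python
import json
import urllib.request

def chat_request(prompt: str, base_url: str = "http://localhost:8080/v1") -> urllib.request.Request:
    """Build a POST request for the OpenAI-compatible /chat/completions endpoint."""
    body = json.dumps({
        "model": "AlSamCur123/OSS",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running:
# with urllib.request.urlopen(chat_request("Hello!")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```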
  - LM Studio
  - Jan
  - Ollama
How to use AlSamCur123/OSS with Ollama:
ollama run hf.co/AlSamCur123/OSS
  - Unsloth Studio
How to use AlSamCur123/OSS with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for AlSamCur123/OSS to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for AlSamCur123/OSS to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for AlSamCur123/OSS to start chatting
  - Pi
How to use AlSamCur123/OSS with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf AlSamCur123/OSS
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "AlSamCur123/OSS" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
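The `models.json` entry above can also be generated programmatically. A minimal sketch using only the standard library; the values are copied from the config above, and the merge behavior (overwriting an existing file) is a simplification:

```python
import json

def pi_provider_config(model_id: str, base_url: str = "http://localhost:8080/v1") -> dict:
    """Build the Pi providers entry for a local llama.cpp server."""
    return {
        "providers": {
            "llama-cpp": {
                "baseUrl": base_url,
                "api": "openai-completions",
                "apiKey": "none",
                "models": [{"id": model_id}],
            }
        }
    }

# Write it to Pi's config location (merge by hand if the file already exists):
# from pathlib import Path
# path = Path.home() / ".pi" / "agent" / "models.json"
# path.parent.mkdir(parents=True, exist_ok=True)
# path.write_text(json.dumps(pi_provider_config("AlSamCur123/OSS"), indent=2))
```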
  - Hermes Agent
How to use AlSamCur123/OSS with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf AlSamCur123/OSS
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default AlSamCur123/OSS
Run Hermes
hermes
  - Docker Model Runner
How to use AlSamCur123/OSS with Docker Model Runner:
docker model run hf.co/AlSamCur123/OSS
  - Lemonade
How to use AlSamCur123/OSS with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull AlSamCur123/OSS
Run and chat with the model
lemonade run user.OSS-{{QUANT_TAG}}

List all available models
lemonade list
Ollama Modelfile:
FROM gpt-oss-20b.MXFP4.gguf
TEMPLATE """<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: {{ currentDate }}
{{- if and .IsThinkSet .Think (ne .ThinkLevel "") }}
Reasoning: {{ .ThinkLevel }}
{{- else if or (not .IsThinkSet) (and .IsThinkSet .Think) }}
Reasoning: medium
{{- end }}
{{- $hasNonBuiltinTools := false }}
{{- if .Tools -}}
{{- $hasBrowserSearch := false }}
{{- $hasBrowserOpen := false }}
{{- $hasBrowserFind := false }}
{{- $hasPython := false }}
{{- range .Tools }}
{{- if eq .Function.Name "browser.search" -}}{{- $hasBrowserSearch = true -}}
{{- else if eq .Function.Name "browser.open" -}}{{- $hasBrowserOpen = true -}}
{{- else if eq .Function.Name "browser.find" -}}{{- $hasBrowserFind = true -}}
{{- else if eq .Function.Name "python" -}}{{- $hasPython = true -}}
{{- else }}{{ $hasNonBuiltinTools = true -}}
{{- end }}
{{- end }}
{{- if or $hasBrowserSearch $hasBrowserOpen $hasBrowserFind $hasPython }}
# Tools
{{- if or $hasBrowserSearch $hasBrowserOpen $hasBrowserFind }}
## browser
// Tool for browsing.
// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.
// Cite information from the tool using the following format:
// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.
// Do not quote more than 10 words directly from the tool output.
// sources=web (default: web)
namespace browser {
{{- if $hasBrowserSearch }}
// Searches for information related to `query` and displays `topn` results.
type search = (_: {
query: string,
topn?: number, // default: 10
source?: string,
}) => any;
{{- end }}
{{- if $hasBrowserOpen }}
// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.
// Valid link ids are displayed with the formatting: `【{id}†.*】`.
// If `cursor` is not provided, the most recent page is implied.
// If `id` is a string, it is treated as a fully qualified URL associated with `source`.
// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.
// Use this function without `id` to scroll to a new location of an opened page.
type open = (_: {
id?: number | string, // default: -1
cursor?: number, // default: -1
loc?: number, // default: -1
num_lines?: number, // default: -1
view_source?: boolean, // default: false
source?: string,
}) => any;
{{- end }}
{{- if $hasBrowserFind }}
// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.
type find = (_: {
pattern: string,
cursor?: number, // default: -1
}) => any;
{{- end }}
} // namespace browser
{{- end }}{{/* end if has browser tools */}}
{{- if $hasPython }}
## python
Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.
{{- end }}{{/* end if hasPython */}}
{{- end }}{{/* end if has any built-in tools */}}
{{- end }}{{/* end if .Tools */}}
# Valid channels: analysis, commentary, final. Channel must be included for every message.{{ if $hasNonBuiltinTools }}
Calls to these tools must go to the commentary channel: 'functions'.
{{- end -}}<|end|>{{/* end of system */ -}}
{{- if or $hasNonBuiltinTools .System -}}
<|start|>developer<|message|>{{- if $hasNonBuiltinTools }}# Tools
## functions
namespace functions {
{{- range .Tools }}
{{- if not (or (eq .Function.Name "browser.search") (eq .Function.Name "browser.open") (eq .Function.Name "browser.find") (eq .Function.Name "python")) }}
{{if .Function.Description }}
// {{ .Function.Description }}
{{- end }}
{{- if and .Function.Parameters.Properties (gt (len .Function.Parameters.Properties) 0) }}
type {{ .Function.Name }} = (_: {
{{- range $name, $prop := .Function.Parameters.Properties }}
{{- if $prop.Description }}
// {{ $prop.Description }}
{{- end }}
{{ $name }}: {{ if gt (len $prop.Type) 1 }}{{ range $i, $t := $prop.Type }}{{ if $i }} | {{ end }}{{ $t }}{{ end }}{{ else }}{{ index $prop.Type 0 }}{{ end }},
{{- end }}
}) => any;
{{- else }}
type {{ .Function.Name }} = () => any;
{{- end }}
{{- end }}{{/* end if not browser tool */}}
{{- end }}{{/* end of range .Tools */}}
} // namespace functions
{{- end }}{{/* end if hasNonBuiltinTools */}}
{{- if .System}}
# Instructions
{{ .System }}
{{- end -}}
<|end|>
{{- end -}}
{{- /* Find the index of the last user message */ -}}
{{- $lastUserIdx := -1 }}
{{- $prefillingContent := false }}
{{- $prefillingThinkingOnly := false }}
{{- range $i, $msg := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq $msg.Role "user" }}
{{- $lastUserIdx = $i }}
{{- end -}}
{{- if and $last (eq $msg.Role "assistant") (gt (len $msg.Content) 0) }}
{{- $prefillingContent = true }}
{{- else if and $last (eq $msg.Role "assistant") (gt (len $msg.Thinking) 0) }}
{{- $prefillingThinkingOnly = true }}
{{- end }}
{{- end -}}
{{- /* Now render messages */ -}}
{{- range $i, $msg := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if (ne $msg.Role "system") -}}
{{- if eq $msg.Role "tool" -}}
{{- if or (eq $msg.ToolName "python") (eq $msg.ToolName "browser.search") (eq $msg.ToolName "browser.open") (eq $msg.ToolName "browser.find") -}}
<|start|>{{ $msg.ToolName }} to=assistant<|message|>{{ $msg.Content }}<|end|>
{{- else -}}
<|start|>functions.{{ $msg.ToolName }} to=assistant<|message|>{{ $msg.Content }}<|end|>
{{- end -}}
{{- else if eq $msg.Role "assistant" -}}
{{- if and $msg.Thinking (gt $i $lastUserIdx) -}}{{- /* Show thinking only after last user message */ -}}
<|start|>assistant<|channel|>analysis<|message|>{{ $msg.Thinking }}{{- if not $prefillingThinkingOnly -}}<|end|>{{- end -}}
{{- end -}}
{{- if gt (len $msg.Content) 0 -}}
<|start|>assistant<|channel|>final<|message|>{{ $msg.Content }}{{- if not $prefillingContent -}}<|end|>{{- end -}}
{{- end -}}
{{- if gt (len $msg.ToolCalls) 0 -}}
{{- range $j, $toolCall := $msg.ToolCalls -}}
{{- $isBuiltin := or (eq $toolCall.Function.Name "python") (eq $toolCall.Function.Name "browser.search") (eq $toolCall.Function.Name "browser.open") (eq $toolCall.Function.Name "browser.find") -}}
<|start|>assistant<|channel|>{{ if $isBuiltin }}analysis{{ else }}commentary{{ end }} to={{ if not $isBuiltin}}functions.{{end}}{{ $toolCall.Function.Name }} <|constrain|>json<|message|>{{ $toolCall.Function.Arguments }}<|call|>
{{- end -}}
{{- end -}}
{{- else if eq $msg.Role "user" -}}
<|start|>{{ $msg.Role }}<|message|>{{ $msg.Content }}<|end|>
{{- end }}
{{- else }}
{{- end }}
{{- end -}}
{{- if not (or $prefillingContent $prefillingThinkingOnly) -}}
<|start|>assistant
{{- end -}}"""
PARAMETER temperature 1.0
PARAMETER top_k 0
PARAMETER top_p 1.0
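The three PARAMETER lines set the default sampling settings: temperature 1.0, with top_k 0 disabling top-k filtering and top_p 1.0 disabling nucleus truncation. These defaults can be overridden per request via the `options` field of Ollama's chat API. A hedged sketch that builds such a request body; the override values are illustrative, and the endpoint assumes Ollama's default port 11434:

```python
import json

def ollama_chat_body(prompt: str, temperature: float = 1.0,
                     top_k: int = 0, top_p: float = 1.0) -> str:
    """JSON body for Ollama's POST /api/chat, overriding the Modelfile sampling defaults."""
    return json.dumps({
        "model": "hf.co/AlSamCur123/OSS",
        "messages": [{"role": "user", "content": prompt}],
        "options": {"temperature": temperature, "top_k": top_k, "top_p": top_p},
        "stream": False,
    })

# e.g. a more deterministic run:
# body = ollama_chat_body("Hello!", temperature=0.2)
# then POST it to http://localhost:11434/api/chat
```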