Instructions to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF",
    filename="WizardCoder-Python-13B-LoRa-Q2_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
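Since WizardCoder-Python is an instruction-tuned coding model, a raw continuation prompt like the one above is mostly a smoke test; coding requests usually work better wrapped in the model's prompt template. A minimal sketch, assuming the Alpaca-style format WizardCoder models are commonly prompted with (verify against the model card's prompt-template section) and reusing `llm` from above:

```python
# Alpaca-style instruction template; this exact wording is an assumption to
# verify against the model card's prompt-template section.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:"
)

output = llm(prompt, max_tokens=512)
print(output["choices"][0]["text"])
```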
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with llama.cpp:
Install with Homebrew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K

# Run inference directly in the terminal:
llama-cli -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
Install with WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K

# Run inference directly in the terminal:
llama-cli -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
Use a pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K

# Run inference directly in the terminal:
./llama-cli -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K

# Run inference directly in the terminal:
./build/bin/llama-cli -hf tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
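However you install it, `llama-server` exposes an OpenAI-compatible API (on port 8080 by default), so any OpenAI client can talk to the local model. A minimal sketch using the `openai` Python package; the `model` value is a placeholder, since the server only serves the model it was started with:

```python
# pip install openai
from openai import OpenAI

# llama-server speaks the OpenAI API locally, so no real key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

resp = client.chat.completions.create(
    model="wizardcoder",  # placeholder; the local server ignores the name
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```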
Use Docker
```sh
docker model run hf.co/tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
- LM Studio
- Jan
- Ollama
How to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with Ollama:
```sh
ollama run hf.co/tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
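Once the model has been pulled, you can also drive it programmatically against the local Ollama server (port 11434 by default). A minimal sketch using the `ollama` Python package, with the same model tag as above:

```python
# pip install ollama
import ollama

# The model tag must match what `ollama run` pulled above.
resp = ollama.chat(
    model="hf.co/tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp["message"]["content"])
```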
- Unsloth Studio
How to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF to start chatting.
```
Use Hugging Face Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF to start chatting.
- Docker Model Runner
How to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with Docker Model Runner:
```sh
docker model run hf.co/tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
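Docker Model Runner can also expose an OpenAI-compatible endpoint to the host. A minimal sketch using `requests`; note that the URL below (host TCP on port 12434 with the `/engines/v1` prefix) is an assumption to verify against the Docker Model Runner documentation for your Docker version:

```python
# pip install requests
import requests

# The endpoint below is an assumption; check the Docker Model Runner docs.
url = "http://localhost:12434/engines/v1/chat/completions"
payload = {
    "model": "hf.co/tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
}
resp = requests.post(url, json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```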
- Lemonade
How to use tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull tensorblock/yeontaek_WizardCoder-Python-13B-LoRa-GGUF:Q2_K
```
Run and chat with the model
```sh
lemonade run user.yeontaek_WizardCoder-Python-13B-LoRa-GGUF-Q2_K
```
List all available models
```sh
lemonade list
```