Tags: GGUF, Merge, mergekit, TensorBlock, Eval Results (legacy), conversational
Base models: Nexusflow/Starling-LM-7B-beta, FuseAI/FuseChat-7B-VaRM
Instructions to use tensorblock/L-MChat-7b-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use tensorblock/L-MChat-7b-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tensorblock/L-MChat-7b-GGUF",
    filename="L-MChat-7b-Q2_K.gguf",
)

# create_chat_completion expects a list of role/content messages
# (the repo defines no example input; this prompt is illustrative):
llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use tensorblock/L-MChat-7b-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tensorblock/L-MChat-7b-GGUF:Q2_K

# Run inference directly in the terminal:
llama-cli -hf tensorblock/L-MChat-7b-GGUF:Q2_K
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf tensorblock/L-MChat-7b-GGUF:Q2_K

# Run inference directly in the terminal:
llama-cli -hf tensorblock/L-MChat-7b-GGUF:Q2_K
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf tensorblock/L-MChat-7b-GGUF:Q2_K

# Run inference directly in the terminal:
./llama-cli -hf tensorblock/L-MChat-7b-GGUF:Q2_K
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf tensorblock/L-MChat-7b-GGUF:Q2_K

# Run inference directly in the terminal:
./build/bin/llama-cli -hf tensorblock/L-MChat-7b-GGUF:Q2_K
```
Use Docker
```sh
docker model run hf.co/tensorblock/L-MChat-7b-GGUF:Q2_K
```
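Once `llama-server` is running via any of the paths above, it serves an OpenAI-compatible HTTP API. A minimal sketch of building a `/v1/chat/completions` request body (the endpoint and port 8080 are llama-server defaults; the prompt and `max_tokens` value are illustrative assumptions):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],  # illustrative prompt
        "max_tokens": 128,  # illustrative limit
    }

body = build_chat_request("tensorblock/L-MChat-7b-GGUF:Q2_K", "Hello!")
print(json.dumps(body, indent=2))

# Send it with any HTTP client once the server is up, e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$(python this_script.py)"
```

The same body works against any OpenAI-compatible endpoint, which is why the web UI and third-party clients can share one server.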
- LM Studio
- Jan
- Ollama
How to use tensorblock/L-MChat-7b-GGUF with Ollama:
```sh
ollama run hf.co/tensorblock/L-MChat-7b-GGUF:Q2_K
```
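Besides the interactive `ollama run` session, Ollama exposes a local REST API (default `http://localhost:11434`). A sketch of a `/api/chat` request body, assuming the default endpoint (the prompt is illustrative):

```python
import json

# Request body for Ollama's POST /api/chat endpoint.
# "stream": False asks for a single JSON response instead of NDJSON chunks.
payload = {
    "model": "hf.co/tensorblock/L-MChat-7b-GGUF:Q2_K",
    "messages": [{"role": "user", "content": "Hello!"}],  # illustrative prompt
    "stream": False,
}
print(json.dumps(payload))

# Send with: curl http://localhost:11434/api/chat -d '<payload JSON>'
```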
- Unsloth Studio
How to use tensorblock/L-MChat-7b-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for tensorblock/L-MChat-7b-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for tensorblock/L-MChat-7b-GGUF to start chatting
```
Use Hugging Face Spaces

```sh
# No setup required:
# open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for tensorblock/L-MChat-7b-GGUF to start chatting
```
- Docker Model Runner
How to use tensorblock/L-MChat-7b-GGUF with Docker Model Runner:
```sh
docker model run hf.co/tensorblock/L-MChat-7b-GGUF:Q2_K
```
- Lemonade
How to use tensorblock/L-MChat-7b-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull tensorblock/L-MChat-7b-GGUF:Q2_K
```
Run and chat with the model
```sh
lemonade run user.L-MChat-7b-GGUF-Q2_K
```
List all available models
```sh
lemonade list
```
Remove .gguf files (keep Q2_K.gguf)
- L-MChat-7b-Q3_K_L.gguf +0 -3
- L-MChat-7b-Q3_K_M.gguf +0 -3
- L-MChat-7b-Q3_K_S.gguf +0 -3
- L-MChat-7b-Q4_0.gguf +0 -3
- L-MChat-7b-Q4_K_M.gguf +0 -3
- L-MChat-7b-Q4_K_S.gguf +0 -3
- L-MChat-7b-Q5_0.gguf +0 -3
- L-MChat-7b-Q5_K_M.gguf +0 -3
- L-MChat-7b-Q5_K_S.gguf +0 -3
- L-MChat-7b-Q6_K.gguf +0 -3
- L-MChat-7b-Q8_0.gguf +0 -3
L-MChat-7b-Q3_K_L.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:33cf2e161d7b0c6797ba673fb2c2e3fb319be14f382be4dcfd5c24e9c9b6edd5
-size 3822035904
```

L-MChat-7b-Q3_K_M.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:606a63b795d4a07f5ea7060f5256e4882f535f0dccda9d550131dc7fc3921e4c
-size 3518997440
```

L-MChat-7b-Q3_K_S.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4c594aad59c274e591e2fb652aa8344ebe268addd40562cc7abcaa636ebc9da1
-size 3164578752
```

L-MChat-7b-Q4_0.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:18c12fb521ad36ce5914f475f04f5c551ca44953642b48bb6d7fdbc53652635b
-size 4108929024
```

L-MChat-7b-Q4_K_M.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:37a2d50b4e3922b5ba19a8db6e80565ee086d7efd7e100e59d30ce2afd4c026b
-size 4368451584
```

L-MChat-7b-Q4_K_S.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9a91dc7d95885e5aa8254b53b13d64ed4eab2996c9132c573dbafd36c05536aa
-size 4140386304
```

L-MChat-7b-Q5_0.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:15df60537123044929e93a99dd0e0cf563006481e060c07c7b778a29d0f7e5be
-size 4997729280
```

L-MChat-7b-Q5_K_M.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d2b4cf2e6fd7fbfc2a9e29f91ab8ab33e40a5c231288ce90f8c186e939cef347
-size 5131422720
```

L-MChat-7b-Q5_K_S.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ca7cbf62f75c0eb020f54704bd69b06f9786b6979b5bf7ac677f28350328a7b8
-size 4997729280
```

L-MChat-7b-Q6_K.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6722525a608b7d1ff61e90bdbc6db42259b024bc3ac5f1bc26abd2b93f0f22c7
-size 5942079552
```

L-MChat-7b-Q8_0.gguf (DELETED)

```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3e65cfa3971e4e83a27571a9d361ecd5727640a778c65e71ca7a43cfed265c04
-size 7695876032
```
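Each deleted file above was not model weights but a three-line Git LFS pointer recording the blob's `version`, `oid`, and `size`. A minimal sketch of parsing such a pointer (the example text is the Q3_K_L pointer from the diff above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:33cf2e161d7b0c6797ba673fb2c2e3fb319be14f382be4dcfd5c24e9c9b6edd5
size 3822035904
"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```

The `size` field shows why these pointers were pruned: the Q3_K_L blob alone is roughly 3.8 GB, and only the Q2_K quantization was kept in the repo.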