Q4_K_M version of https://huggingface.co/t-tech/T-lite-it-1.0
Example with llama-cpp-python:

```python
from llama_cpp import Llama
from huggingface_hub import snapshot_download

# Download the GGUF weights from the Hub into the current directory
snapshot_download(repo_id="ichrnkv/t_lite_1.0_gguf", local_dir="./")

# Load the model with llama-cpp-python
model = Llama(
    model_path="./model.gguf",
    verbose=True,
    n_gpu_layers=-1,  # offload all layers to the GPU
    seed=42,
)
```
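As a usage sketch, a single-turn chat completion via `create_chat_completion` (the helper name `chat`, the prompt text, and the sampling settings here are assumptions for illustration, not part of this model card; check the actual GGUF file name in the repo):

```python
import os

def chat(model, user_text, max_tokens=256):
    """One-turn chat completion; `model` is a llama_cpp.Llama instance."""
    out = model.create_chat_completion(
        messages=[{"role": "user", "content": user_text}],
        max_tokens=max_tokens,
        temperature=0.7,  # assumed sampling setting, tune as needed
    )
    return out["choices"][0]["message"]["content"]

# Guarded so the sketch can run even when the weights are not present
if os.path.exists("./model.gguf"):
    from llama_cpp import Llama

    llm = Llama(model_path="./model.gguf", n_gpu_layers=-1, seed=42, verbose=False)
    print(chat(llm, "Привет! Как дела?"))
```

Since the base model is instruction-tuned, the chat API (which applies the model's chat template) is generally preferable to raw `model(prompt)` completion calls.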
Base model: t-tech/T-lite-it-1.0