---
library_name: llama.cpp
---

# RickLLM

This is a model in GGUF format, converted for use with llama.cpp.

## Model Details

- **Model Format:** GGUF (CPU/GPU inference via llama.cpp)
- **Base Model:** Unsloth
- **Quantization:** Q8_0
- **Use Case:** Efficient local inference on either GPU or CPU through llama.cpp
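Q8_0 is one of llama.cpp's higher-fidelity quantization formats: weights are stored in blocks of 32 as 8-bit integer codes, with a single per-block scale factor. The sketch below is an illustration of that idea in plain Python, not llama.cpp's actual implementation, and the helper name `q8_0_roundtrip` is invented for this example.

```python
# Illustrative sketch of Q8_0-style block quantization (not llama.cpp's actual code).
# Q8_0 stores weights in blocks of 32 int8 codes plus one scale per block.

def q8_0_roundtrip(block):
    """Quantize one block of float weights to int8 codes and back."""
    scale = max(abs(w) for w in block) / 127.0        # one scale per block
    if scale == 0.0:
        return [0.0] * len(block), [0] * len(block), 0.0
    codes = [max(-127, min(127, round(w / scale))) for w in block]
    dequant = [c * scale for c in codes]              # what inference actually sees
    return dequant, codes, scale

block = [0.8, -1.2, 0.05, 2.4, -0.33, 1.9, -2.5, 0.0] * 4   # one block of 32 weights
dequant, codes, scale = q8_0_roundtrip(block)
max_err = max(abs(w - d) for w, d in zip(block, dequant))
print(f"scale={scale:.5f}, max abs error={max_err:.5f}")    # error stays below scale/2
```

Because each block keeps its own scale, a single outlier weight only degrades the precision of its own 32-value block rather than the whole tensor.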

## Usage

This model can be used with llama.cpp. Example usage:

```bash
# Older llama.cpp builds ship the CLI binary as `main`:
./main -m RickLLM.gguf -n 1024

# Current builds name it `llama-cli`; pass a prompt with -p:
./llama-cli -m RickLLM.gguf -p "Your prompt here" -n 1024
```

## License

Please refer to the original model's license for terms of use.