How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="jeiku/General_Purpose_3B_GGUF",
	filename="",  # set to one of the GGUF files in the repo (pick a quantization)
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Hello, how are you?"}
	]
)
print(response["choices"][0]["message"]["content"])

mooby

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
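Conceptually, a linear merge takes a weight-normalized average of each parameter tensor across the source models. A minimal sketch of that arithmetic (using plain Python lists as stand-in tensors; mergekit applies the same idea to the real checkpoint weights):

```python
# Sketch of a linear merge: a weighted average of each named parameter
# across models. Lists stand in for tensors here for illustration only.

def linear_merge(models, weights):
    """Average corresponding parameters across models, normalized by weight sum."""
    total = sum(weights)
    merged = {}
    for name in models[0]:
        merged[name] = [
            sum(w * m[name][i] for m, w in zip(models, weights)) / total
            for i in range(len(models[0][name]))
        ]
    return merged

# Three models with equal weight 1, mirroring the YAML configuration below.
a = {"layer.weight": [1.0, 2.0]}
b = {"layer.weight": [3.0, 4.0]}
c = {"layer.weight": [5.0, 6.0]}
print(linear_merge([a, b, c], [1, 1, 1]))  # {'layer.weight': [3.0, 4.0]}
```

With all weights equal to 1, as in this model's configuration, the merge is a simple element-wise mean of the three source checkpoints.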

Models Merged

The following models were included in the merge:

- new1 + jeiku/Theory_of_Mind_128_StableLM
- new1 + jeiku/Everything_v3_128_StableLM
- new1 + jeiku/Gnosis_StableLM

Configuration

The following YAML configuration was used to produce this model:

merge_method: linear
models:
  - model: new1+jeiku/Theory_of_Mind_128_StableLM
    parameters:
      weight: 1
  - model: new1+jeiku/Everything_v3_128_StableLM
    parameters:
      weight: 1
  - model: new1+jeiku/Gnosis_StableLM
    parameters:
      weight: 1
dtype: float16
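Saved to a file (here `config.yml`, a hypothetical path), this configuration can be reproduced with mergekit's `mergekit-yaml` command-line entry point:

```shell
# Hypothetical paths; requires mergekit to be installed.
pip install mergekit
mergekit-yaml config.yml ./merged-model
```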
Model size: 3B params
Architecture: stablelm
Format: GGUF
Available quantizations: 2-bit, 4-bit, 6-bit, 16-bit
