Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="DevQuasar/llama3_8b_chat_brainstorm_plus-GGUF",
	filename="llama3_8b_chat_brainstorm_plus.Q2_K.gguf",  # quant file shown in the llama.cpp example below
)
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)
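The llama.cpp command further down shows that this model expects a `### HUMAN:` / `### ASSISTANT:` turn format. A minimal sketch of building such a prompt before handing it to the `Llama` instance; the `format_prompt` helper is hypothetical, not part of llama-cpp-python:

```python
# Sketch: render chat turns into the "### HUMAN: / ### ASSISTANT:" prompt
# format used by this model (format taken from the llama.cpp example below).
def format_prompt(turns):
    """Render (role, text) turns into the model's expected prompt string."""
    parts = []
    for role, text in turns:
        parts.append(f"### {role.upper()}: {text}")
    parts.append("### ASSISTANT:")  # cue the model to produce its reply
    return "\n".join(parts)

prompt = format_prompt([("human", "I'd like to build an online marketplace")])
print(prompt)

# The rendered prompt can then be passed to the Llama instance, stopping
# generation at the next human turn marker:
# output = llm(prompt, max_tokens=400, stop=["### HUMAN:"])
```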

Brainstorm Plus GGUF


Quantized GGUF model files for DevQuasar/llama3_8b_chat_brainstorm_plus

Brainstorm facilitates idea exploration through interaction with a large language model (LLM). Rather than providing direct answers, the model engages in a dialogue with users, offering probing questions aimed at fostering deeper contemplation and consideration of various facets of their ideas.

Usage

LMStudio

With LM Studio (https://lmstudio.ai/), use the brainstorm.preset.json preset.

llama.cpp

./main -m llama3_8b_chat_brainstorm_plus.Q2_K.gguf -p "### HUMAN: I'd like to build an online marketplace\n ### ASSISTANT:" -n 400 -e -ins -r "### HUMAN:\n" -r "### ASSISTANT: "
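If the quantized file is not yet on disk, it can be fetched programmatically before invoking `./main`. A minimal sketch using `hf_hub_download` from the huggingface_hub package (assumed installed via `pip install huggingface_hub`):

```python
# Sketch: download the Q2_K quant from the Hugging Face Hub, then pass
# the returned local path to ./main -m.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="DevQuasar/llama3_8b_chat_brainstorm_plus-GGUF",
    filename="llama3_8b_chat_brainstorm_plus.Q2_K.gguf",
)
print(model_path)  # local path to the GGUF file
```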

I'm doing this to 'Make knowledge free for everyone', using my personal time and resources.

If you want to support my efforts please visit my ko-fi page: https://ko-fi.com/devquasar

Also feel free to visit my website https://devquasar.com/

Downloads last month: 84

Format: GGUF
Model size: 8B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
