Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# The filename below is only an example: pick any GGUF file from this repository
# (the "*Q4_K_M.gguf" glob is an assumption; adjust it to the quantization you want).
llm = Llama.from_pretrained(
	repo_id="Atotti/RakutenAI-2.0-mini-instruct-gguf",
	filename="*Q4_K_M.gguf",
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "What is the capital of Japan?"}
	]
)
print(response["choices"][0]["message"]["content"])
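
Note that the filename argument of Llama.from_pretrained accepts either an exact file name or a glob pattern matched against the files in the repository; lower-bit quantizations need less memory at some cost in output quality. The *Q4_K_M.gguf pattern above is an assumption, so check the repository's file list for the exact names.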

Atotti/RakutenAI-2.0-mini-instruct-gguf

This repository provides models converted to GGUF format from Rakuten/RakutenAI-2.0-mini-instruct so that they can be run with tools such as llama.cpp and text-generation-webui.
For details about the model and its performance, see the README of the original model.
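
If you prefer to point llama.cpp or text-generation-webui at a local file, you can download a single GGUF file with huggingface_hub and load it directly. This is a minimal sketch; the file name used below is an assumption, so substitute one of the files actually present in this repository.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from the repository into the local Hugging Face cache.
# NOTE: the filename is an assumed example; use a file that exists in the repository.
model_path = hf_hub_download(
    repo_id="Atotti/RakutenAI-2.0-mini-instruct-gguf",
    filename="RakutenAI-2.0-mini-instruct-Q4_K_M.gguf",
)

# Load the local file with llama-cpp-python; the same path can also be passed to
# llama.cpp's command-line tools or to text-generation-webui.
llm = Llama(model_path=model_path, n_ctx=4096)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])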

Format: GGUF
Model size: 2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
