Instructions for using nitky/Megac4ai-command-r-plus with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use nitky/Megac4ai-command-r-plus with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nitky/Megac4ai-command-r-plus")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nitky/Megac4ai-command-r-plus")
model = AutoModelForCausalLM.from_pretrained("nitky/Megac4ai-command-r-plus")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nitky/Megac4ai-command-r-plus with vLLM:
Install vLLM from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nitky/Megac4ai-command-r-plus"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nitky/Megac4ai-command-r-plus",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/nitky/Megac4ai-command-r-plus
```
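However the server is started, its OpenAI-compatible endpoint can also be queried from Python instead of curl. A minimal sketch, assuming the `openai` client is installed (`pip install openai`); the same pattern works for the SGLang server below by switching the port to 30000:

```python
from openai import OpenAI

# Point the client at the local vLLM server's OpenAI-compatible API.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # use port 30000 for the SGLang server
    api_key="EMPTY",  # local servers generally accept any placeholder key
)

response = client.chat.completions.create(
    model="nitky/Megac4ai-command-r-plus",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```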
- SGLang
How to use nitky/Megac4ai-command-r-plus with SGLang:
Install SGLang from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nitky/Megac4ai-command-r-plus" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nitky/Megac4ai-command-r-plus",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "nitky/Megac4ai-command-r-plus" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nitky/Megac4ai-command-r-plus",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use nitky/Megac4ai-command-r-plus with Docker Model Runner:
```bash
docker model run hf.co/nitky/Megac4ai-command-r-plus
```
Megac4ai-command-r-plus
🚨 This model was created using a special mergekit build that supports c4ai-command-r-plus.

This is a 160B frankenmerge created by interleaving layers of CohereForAI/c4ai-command-r-plus with itself using mergekit.
Output comparison
Test Case Details
Condition: temperature=0.3
```
<|START_OF_TURN_TOKEN|><|USER_TOKEN|>ティム: やあ、調子はどう?
キム: いろいろやろうとしてたんだけど、また先延ばしにしちゃったよ。
ティム: 何をしようとしていたの?
キム: 大学の課題だよ。どうにもやる気が出なくてね。
ティム: 集中できないなら、ポモドーロ・テクニックをするといいよ。
キム: 何それ?
ティム: 25分作業して、5分休憩するのを繰り返すんだよ。一回あたりの作業時間が短くて集中できるよ。
キム: うーん、集中っていうわけじゃないんだよね
ティム: じゃあ1日に5分だけでいいから机で課題をするっていうのはどう?
キム: 5分じゃ何もできなくない?
ティム: 短い時間でもいいから机で作業するっていうのがポイントなんだよ。むしろもっとやりたい、くらいで止めておくと毎日続くもっと長い時間できるようになるよ。
キム: 確かにそれならできるかも。ありがとう!
Q: キムは何をやってみようとしていますか?また何故それをやろうとしていますか?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```

(English gloss: the prompt is a Japanese dialogue in which Tim suggests the Pomodoro Technique, and then the habit of working at the desk for just five minutes a day, to help Kim stop procrastinating on a university assignment; the final question asks what Kim is going to try and why.)
This task is included in elyza/ELYZA-tasks-100.
Output Example
| Model | Output |
|---|---|
| CohereForAI/c4ai-command-r-plus | キムは大学の課題をやろうとしています。しかし、やる気が起きず、先延ばしにしてしまったようです。 (English: Kim is trying to do a university assignment, but it seems they couldn't get motivated and put it off.) |
| nitky/Megac4ai-command-r-plus | キムは大学の課題をやろうとしています。やる気が出ず、集中して作業することができないため、ティムにアドバイスを求めています。ティムが提案したポモドーロ・テクニックや、1日に5分だけ机で課題をするという方法を試すことで、課題に取り組む習慣を身につけようとしています。 (English: Kim is trying to do a university assignment. Unable to get motivated and concentrate on the work, Kim asked Tim for advice. By trying the methods Tim suggested, namely the Pomodoro Technique and doing the assignment at the desk for just five minutes a day, Kim is trying to build a habit of working on it.) |
Test environment
This model was tested with text-generation-webui, using the min_p preset and the Null preset with temperature=0.3 for generation.
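Outside text-generation-webui, recent transformers releases support min_p sampling natively, so this setup can be roughly approximated with the code in the Usage section below. A minimal sketch, where the min_p value is an illustrative assumption rather than the preset's exact setting:

```python
from transformers import GenerationConfig

# Approximate decoding settings for the test setup above.
# min_p=0.05 is an assumed value, not the webui preset's exact number.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.3,
    min_p=0.05,
    max_new_tokens=100,
)
```

This can then be passed to generation as `model.generate(input_ids, generation_config=generation_config)`.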
Usage
Please install transformers from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nitky/megac4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
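At roughly 160B parameters, the model needs on the order of 320 GB of memory in float16, so single-device loading as written above is impractical. A minimal multi-GPU sketch, assuming accelerate is installed (`device_map="auto"` shards the weights across visible devices):

```python
import torch
from transformers import AutoModelForCausalLM

# Shard the float16 weights across all available GPUs (requires `accelerate`).
model = AutoModelForCausalLM.from_pretrained(
    "nitky/megac4ai-command-r-plus",
    torch_dtype=torch.float16,
    device_map="auto",
)
```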
Quantized model through bitsandbytes, 4-bit precision
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model_id = "nitky/megac4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
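The plain `load_in_4bit=True` call uses bitsandbytes defaults. A common refinement is NF4 quantization with bfloat16 compute and nested quantization; the settings below are illustrative choices, not the author's tested values:

```python
import torch
from transformers import BitsAndBytesConfig

# NF4 4-bit quantization with bf16 compute; nested quantization saves a bit more memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```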
Merge Details
Merge Method
This model was merged using the passthrough merge method.
Models Merged
The following models were included in the merge:
- CohereForAI/c4ai-command-r-plus
Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 20]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [11, 31]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [22, 42]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [33, 53]
    model: CohereForAI/c4ai-command-r-plus
- sources:
  - layer_range: [44, 64]
    model: CohereForAI/c4ai-command-r-plus
```
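For intuition on the 160B figure: each of the five slices contributes 20 layers, so the merged model stacks 100 layers against the base model's 64 (the overlapping ranges mean layers such as 11 through 19 appear in two slices). Scaling the base's roughly 104B parameters by 100/64 gives about 104B × 100/64 ≈ 162B; since the embedding weights are not duplicated, the actual total lands slightly lower, near the stated 160B. These are back-of-the-envelope estimates, not reported figures.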