Instructions for using ubitech-edg/commandr-35b-sft with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ubitech-edg/commandr-35b-sft with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ubitech-edg/commandr-35b-sft")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/commandr-35b-sft")
model = AutoModelForCausalLM.from_pretrained("ubitech-edg/commandr-35b-sft")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ubitech-edg/commandr-35b-sft with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ubitech-edg/commandr-35b-sft"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ubitech-edg/commandr-35b-sft",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker
```shell
docker model run hf.co/ubitech-edg/commandr-35b-sft
```
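Because the vLLM server exposes an OpenAI-compatible API, you can also query it from Python. Below is a minimal sketch using the `openai` client, assuming the server is running locally on port 8000 (the API key value is a placeholder; a local server does not check it by default):

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes the server was started with: vllm serve "ubitech-edg/commandr-35b-sft"
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local vLLM endpoint (assumed default port)
    api_key="EMPTY",                      # placeholder for a local server
)

response = client.chat.completions.create(
    model="ubitech-edg/commandr-35b-sft",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```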
- SGLang
How to use ubitech-edg/commandr-35b-sft with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "ubitech-edg/commandr-35b-sft" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ubitech-edg/commandr-35b-sft",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "ubitech-edg/commandr-35b-sft" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ubitech-edg/commandr-35b-sft",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Docker Model Runner
How to use ubitech-edg/commandr-35b-sft with Docker Model Runner:
```shell
docker model run hf.co/ubitech-edg/commandr-35b-sft
```
Command-R 35B – SFT (Supervised Fine-Tuning on Synthetic QA)
Model type: Causal Language Model
Base model: CohereLabs/c4ai-command-r-v01
License: Apache 2.0
Framework: Axolotl
Overview
commandr-35b-sft is a supervised fine-tuned variant of Cohere's Command-R 35B model.
Fine-tuning was performed on a high-quality instruction-following dataset using LoRA adapters, enabling improved conversational reasoning and question answering.
Training was conducted on the Leonardo EuroHPC system.
Training Setup
Objective: Supervised fine-tuning (instruction following)
Adapter type: LoRA
Precision: bfloat16
Hardware: 8 nodes × 2 × NVIDIA A100 64GB GPUs
Framework: DeepSpeed ZeRO-1, Axolotl, PyTorch 2.5.1+cu121
Runtime: ~6 hours
Dataset split: 70% train / 30% validation
Dataset
Name: axolotl_deduplicated_synthetic_qa.jsonl
Type: Instruction-following synthetic QA dataset
Each sample follows a QA/chat format used in the alpaca_chat.load_qa schema.
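For illustration, one record in this format might look like the sketch below. The field names (`question`/`answer`) are an assumption based on typical Axolotl QA prompt strategies; the exact schema depends on the Axolotl version and prompt-strategy configuration.

```python
# Hypothetical example of one JSONL record in a question/answer layout
# as consumed by Axolotl's alpaca_chat.load_qa strategy (field names assumed).
import json

sample = {
    "question": "What is the capital of France?",
    "answer": "The capital of France is Paris.",
}

with open("axolotl_deduplicated_synthetic_qa.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```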
Hyperparameters
| Parameter | Value |
|---|---|
| Sequence length | 2048 |
| Micro batch size | 1 |
| Gradient accumulation | 2 |
| Epochs | 1 |
| Learning rate | 0.0001 |
| LR scheduler | cosine |
| Optimizer | AdamW (8-bit) |
| Warmup steps | 20 |
| Weight decay | 0.0 |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| LoRA target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Gradient checkpointing | ✓ |
| Flash attention | ✓ |
| Auto resume | ✓ |
| Loss watchdog threshold | 8.0 |
| Loss watchdog patience | 20 |
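For reference, the LoRA-related rows of the table correspond to a configuration along the lines of the following `peft` LoraConfig. This is an illustrative sketch only; the actual run was configured through Axolotl rather than `peft` directly.

```python
# Illustrative LoRA configuration mirroring the hyperparameters above.
# The fine-tuning itself was driven by Axolotl; this peft sketch is for reference only.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                # LoRA rank
    lora_alpha=32,       # LoRA alpha
    lora_dropout=0.05,   # LoRA dropout
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```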
Tokenizer
Tokenizer type: AutoTokenizer
Special token: <|end_of_text|> as pad_token
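If you load the tokenizer yourself and need an explicit pad token (for batched generation, for example), you can mirror this setting. A small sketch:

```python
# Ensure the pad token matches the one used during fine-tuning.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/commandr-35b-sft")
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|end_of_text|>"
print(tokenizer.pad_token, tokenizer.pad_token_id)
```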