Instructions to use SkunkworksAI/BakLLaVA-1 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use SkunkworksAI/BakLLaVA-1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SkunkworksAI/BakLLaVA-1")
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/BakLLaVA-1", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SkunkworksAI/BakLLaVA-1 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SkunkworksAI/BakLLaVA-1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SkunkworksAI/BakLLaVA-1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/SkunkworksAI/BakLLaVA-1
```
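The curl call above can also be issued from Python. The sketch below builds the same OpenAI-compatible `/v1/completions` request with only the standard library; `build_completion_request` is a hypothetical helper for illustration, not part of vLLM, and the final call is commented out because it only works while the server is running.

```python
import json
import urllib.request

def build_completion_request(base_url, model, prompt, max_tokens=512, temperature=0.5):
    """Build an OpenAI-compatible /v1/completions POST request."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request(
    "http://localhost:8000", "SkunkworksAI/BakLLaVA-1", "Once upon a time,"
)
# urllib.request.urlopen(req)  # uncomment once the vLLM server is up
```

The same helper works against the SGLang server below by pointing `base_url` at port 30000.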
- SGLang
How to use SkunkworksAI/BakLLaVA-1 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SkunkworksAI/BakLLaVA-1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SkunkworksAI/BakLLaVA-1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "SkunkworksAI/BakLLaVA-1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SkunkworksAI/BakLLaVA-1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use SkunkworksAI/BakLLaVA-1 with Docker Model Runner:
```shell
docker model run hf.co/SkunkworksAI/BakLLaVA-1
```
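Whichever server you run, the OpenAI-compatible completions response has the same shape: a `choices` list whose entries carry the generated `text`. This is a minimal sketch of extracting it; the sample response is illustrative only, not real BakLLaVA-1 output.

```python
def completion_text(response):
    """Pull the generated text out of an OpenAI-compatible /v1/completions response."""
    return response["choices"][0]["text"]

# Illustrative response shape (not real model output):
sample = {
    "id": "cmpl-123",
    "object": "text_completion",
    "model": "SkunkworksAI/BakLLaVA-1",
    "choices": [{"index": 0, "text": " there was a fox.", "finish_reason": "stop"}],
}
print(completion_text(sample))  # -> " there was a fox."
```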
Commit History
Update README.md fc32cd4
Update README.md 96b0efe
Update README.md ffd6a1f
Update README.md 24c48e2
Update README.md a31585c
Update README.md baab364
Delete latest 63aa67f
Delete zero_to_fp32.py 61159e9
Delete training_args.bin 929e2fa
Delete rng_state_3.pth eb2fb03
Delete rng_state_2.pth 0a23529
Delete scheduler.pt fd0a56a
Delete rng_state_1.pth 410845b
Delete rng_state_0.pth e16d1c7
Delete checkpoint-85000 2da19dc
Delete checkpoint-80000 db4f567
Delete checkpoint-65000 0554148
Delete checkpoint-60000 b237365
Delete checkpoint-35000 b667baf
Delete checkpoint-30000 4a64f45
Delete checkpoint-115000 e4be0e5
Delete checkpoint-110000 15f6532
Update README.md 625e23c
Update README.md 30e0bf4
Create README.md a929bf9
a 30993fc (committed by pharaouk)
a ce92a7e (committed by pharaouk)
a 438cafc (committed by pharaouk)
a 1856b79 (committed by pharaouk)
a 81a3292 (committed by pharaouk)
initial commit e363a09 (committed by Farouk)