Instructions to use rabbitcat/BashCopilot-6B-preview with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use rabbitcat/BashCopilot-6B-preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="rabbitcat/BashCopilot-6B-preview")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rabbitcat/BashCopilot-6B-preview")
model = AutoModelForCausalLM.from_pretrained("rabbitcat/BashCopilot-6B-preview")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use rabbitcat/BashCopilot-6B-preview with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "rabbitcat/BashCopilot-6B-preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rabbitcat/BashCopilot-6B-preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
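The same OpenAI-compatible endpoint can also be called from Python. This is a minimal sketch using only the standard library, assuming the vLLM server above is running on `localhost:8000`; the payload fields mirror the curl example, and the prompt string is illustrative.

```python
# Build and (optionally) send a /v1/completions request to a local vLLM server.
import json
import urllib.request

def completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    """Assemble the JSON body expected by the OpenAI-compatible completions API."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = completion_payload("rabbitcat/BashCopilot-6B-preview",
                             "list all .log files under /var/log")

# Requires the server to be running; uncomment to send the request:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```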
```shell
docker model run hf.co/rabbitcat/BashCopilot-6B-preview
```
- SGLang
How to use rabbitcat/BashCopilot-6B-preview with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "rabbitcat/BashCopilot-6B-preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rabbitcat/BashCopilot-6B-preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "rabbitcat/BashCopilot-6B-preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rabbitcat/BashCopilot-6B-preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use rabbitcat/BashCopilot-6B-preview with Docker Model Runner:
```shell
docker model run hf.co/rabbitcat/BashCopilot-6B-preview
```
Bash Copilot is an LLM that generates bash command lines or scripts from a natural-language description (NL2Bash). The model supports multiturn conversation, which means you can (1) implicitly refer to a previously mentioned entity with a pronoun, or omit it entirely, and (2) optimize or complete the target code step by step.

Bash Copilot is a fine-tuned version of Yi-1.5-6B, trained on 21k multiturn conversations in both Chinese and English that convert natural language to bash code. The source corpus was collected from GitHub repositories with more than 500 stars. The dataset will be released with the next (non-preview) release of BashCopilot-6B.
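The multiturn behavior described above can be driven with the Transformers pipeline chat interface. This is a hedged sketch, not from the model card: it assumes the preview checkpoint ships a chat template, and the `max_new_tokens` value and example prompts are illustrative. The `ask` helper keeps the conversation history so a later turn like "write this to count.sh" can resolve "this" against earlier turns.

```python
# Sketch: multiturn NL2Bash with the Transformers text-generation pipeline.
# Assumes the model's tokenizer provides a chat template (not confirmed
# by the model card).
from transformers import pipeline

def ask(pipe, history, user_msg, max_new_tokens=256):
    """Append a user turn, generate the assistant reply, and keep both in history."""
    history.append({"role": "user", "content": user_msg})
    out = pipe(history, max_new_tokens=max_new_tokens)
    # With chat-style input, generated_text is the message list plus the new
    # assistant turn; take the content of that last message.
    reply = out[0]["generated_text"][-1]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# Requires downloading the 6B checkpoint; uncomment to run:
# pipe = pipeline("text-generation", model="rabbitcat/BashCopilot-6B-preview")
# history = []
# ask(pipe, history, "count the files under /path/to/target")
# ask(pipe, history, "write this to count.sh")  # "this" resolves via history
```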
Here are some examples:
Example 1:
- user: compare id field of a.json and b.json
- assistant:
```shell
diff <(jq -r '.id' a.json) <(jq -r '.id' b.json)
```
- user: print the bigger one
- assistant:
```shell
echo $(( $(jq -r '.id' a.json) > $(jq -r '.id' b.json) ? $(jq -r '.id' a.json) : $(jq -r '.id' b.json) ))
```
Example 2:
- user: count the file under /path/to/target
- assistant:
```shell
find /path/to/target -type f | wc -l
```
- user: write this to count.sh
- assistant:
```shell
echo "find /path/to/target -type f | wc -l" > count.sh
```
- user: repeatedly run it until it returns 10, then break the loop
- assistant:
```shell
while [ "$(bash count.sh)" -ne 10 ]; do sleep 1; done
```
- user: instead of sleep, send "not ready" to 1.2.3.4:8888 via TCP
- assistant:
```shell
while [ "$(bash count.sh)" -ne 10 ]; do echo not ready > /dev/tcp/1.2.3.4/8888; sleep 1; done
```
Example 3:
- user: 文件夹/path/to/target下的文件个数 (number of files in the folder /path/to/target)
- assistant:
```shell
ls /path/to/target | wc -l
```
- user: 写入count.sh (write it to count.sh)
- assistant:
```shell
echo '#!/bin/bash
count=$(ls /path/to/target | wc -l)
echo $count' > count.sh
```
- user: 循环运行它,直到结果为10退出循环 (run it in a loop; exit the loop once the result is 10)
- assistant:
```shell
while true
do
    # count.sh prints the count on stdout (its exit status is always 0),
    # so compare its output, not $?; run it via bash since it is not executable
    if [ "$(bash count.sh)" -eq 10 ]
    then
        break
    fi
done
```