Instructions for using pool-water/script-kiddie with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use pool-water/script-kiddie with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="pool-water/script-kiddie")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pool-water/script-kiddie")
model = AutoModelForCausalLM.from_pretrained("pool-water/script-kiddie")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use pool-water/script-kiddie with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "pool-water/script-kiddie"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "pool-water/script-kiddie",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker:

```shell
docker model run hf.co/pool-water/script-kiddie
```
- SGLang
How to use pool-water/script-kiddie with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "pool-water/script-kiddie" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "pool-water/script-kiddie",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "pool-water/script-kiddie" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "pool-water/script-kiddie",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use pool-water/script-kiddie with Docker Model Runner:
```shell
docker model run hf.co/pool-water/script-kiddie
```
script-kiddie 1.0 (Qwen3 0.6B)
Made with love by whatever
What is script-kiddie?
script-kiddie is a model trained on tool usage, bash script writing, Python coding, and Kali Linux tools. It is intended to be an educational example of a small model that can assist in light pen-testing.
Chat Template
We are using Qwen's format for conversations and function calling. Here's an example:
```python
print(tokenizer.apply_chat_template(ds["train"][7500]["messages"], tokenize=False))
```
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> [{'type': 'function', 'function': {'name': 'get_sunrise_sunset_time', 'description': 'Get the sunrise and sunset times for a specific location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'date': {'type': 'string', 'description': "The desired date in format 'YYYY-MM-DD'"}}, 'required': ['location', 'date']}}}, {'type': 'function', 'function': {'name': 'calculate_distance', 'description': 'Calculate the distance between two locations', 'parameters': {'type': 'object', 'properties': {'location1': {'type': 'string', 'description': 'The first location'}, 'location2': {'type': 'string', 'description': 'The second location'}}, 'required': ['location1', 'location2']}}}] </tools> Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{tool_call}
</tool_call><|im_end|>
<|im_start|>user
Hi, I am planning a trip to New York City on 2022-12-25. Can you tell me the sunrise and sunset times for that day?<|im_end|>
<|im_start|>assistant
<tool_call>
{'name': 'get_sunrise_sunset_time', 'arguments': {'location': 'New York City', 'date': '2022-12-25'}}
</tool_call><|im_end|>
<|im_start|>user
<tool_response>
{'sunrise': '07:16 AM', 'sunset': '04:31 PM'}
</tool_response><|im_end|>
<|im_start|>assistant
<think>

</think>

On December 25, 2022, in New York City, the sun will rise at 07:16 AM and set at 04:31 PM.<|im_end|>
```
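For illustration, the `<tool_call>` wrapper shown above can be reproduced with a few lines of plain Python. This is a hypothetical helper, not part of the model's API; in practice the tokenizer's chat template emits this structure for you. Note that standard tool-call payloads are JSON (double quotes), while the dataset excerpt above shows Python dict-repr style (single quotes):

```python
import json

# Hypothetical helper: render one assistant tool call in the Qwen-style
# format used by the chat template above.
def render_tool_call(name, arguments):
    payload = json.dumps({"name": name, "arguments": arguments})
    return f"<tool_call>\n{payload}\n</tool_call>"

call = render_tool_call(
    "get_sunrise_sunset_time",
    {"location": "New York City", "date": "2022-12-25"},
)
print(call)
```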
Evaluation
Evaluations are on par with Qwen3:
hf (pretrained=pool-water/script-kiddie, dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: 2, batch_size: auto (40)

| Tasks     | Version | Filter | n-shot | Metric   |   | Value  |   | Stderr |
|-----------|--------:|--------|-------:|----------|---|-------:|---|-------:|
| boolq     |       2 | none   |      2 | acc      | ↑ | 0.6939 | ± | 0.0081 |
| hellaswag |       1 | none   |      2 | acc      | ↑ | 0.3961 | ± | 0.0049 |
|           |         | none   |      2 | acc_norm | ↑ | 0.4963 | ± | 0.0050 |
| piqa      |       1 | none   |      2 | acc      | ↑ | 0.6757 | ± | 0.0109 |
|           |         | none   |      2 | acc_norm | ↑ | 0.6741 | ± | 0.0109 |
| rte       |       1 | none   |      2 | acc      | ↑ | 0.6751 | ± | 0.0282 |
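This table appears to come from EleutherAI's lm-evaluation-harness; the header line corresponds to an invocation like the following (the task list and flags are reconstructed from that header, so treat this as an approximation rather than the exact command used):

```shell
pip install lm-eval

lm_eval --model hf \
    --model_args pretrained=pool-water/script-kiddie,dtype=bfloat16 \
    --tasks boolq,hellaswag,piqa,rte \
    --num_fewshot 2 \
    --batch_size auto
```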
Usage
Suggested use is:
- serve with `vllm`
- use agent `qwen_agent`
Example Qwen Agent Usage
```python
# Assumes: `pip install qwen-agent`, an OpenAI-compatible server (e.g. vLLM)
# reachable at `base_url`, and a user `query` string. "nmap" and "gobuster"
# are the tool names registered with the agent.
from qwen_agent.agents import Assistant

agent = Assistant(
    llm={
        "model": "pool-water/script-kiddie",
        "model_server": base_url,
        "api_key": "EMPTY",
        "generate_cfg": {
            "max_tokens": 1000,
            "temperature": 0.0,
            "top_p": 0.9,
            "frequency_penalty": 0.5,
            "presence_penalty": 0.0,
            "extra_body": {
                "chat_template_kwargs": {
                    "enable_thinking": False,
                },
            },
        },
    },
    function_list=["nmap", "gobuster"],
)
stream = agent.run(
    [
        {
            "role": "user",
            "content": query,
        },
    ],
)
```
Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: @whatever
- Model type: text-generation
- Language(s) (NLP): en
- License: ???
- Finetuned from model: Qwen/Qwen3-0.6B
Uses
This software is provided strictly for educational and research purposes only. It is intended to help users learn, experiment, and study relevant concepts. The authors and contributors do not endorse or condone any misuse of this software. Use of this software for malicious, unlawful, or unauthorized activities is strictly prohibited, and users assume full responsibility for compliance with all applicable laws and regulations.
Training Hyperparameters
- Training regime: fp32
Environmental Impact
- Hardware Type: A100
- Hours used: 0.75 hours
- Cloud Provider: RunPod
- Compute Region: KS-2
- Carbon Emitted: ~0.08 kg
Compute Infrastructure
- Trained for 45 minutes on a single A100 on RunPod
Hardware
A100
Software
HuggingFace SFT