Instructions for using ATL-Machine/affine-0KB with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use ATL-Machine/affine-0KB with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ATL-Machine/affine-0KB")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ATL-Machine/affine-0KB")
model = AutoModelForCausalLM.from_pretrained("ATL-Machine/affine-0KB")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
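To stream tokens to the terminal as they are generated rather than waiting for the full completion, you can pass a `TextStreamer` to `generate`; a minimal sketch:

```python
# Minimal sketch: stream generated tokens to stdout with TextStreamer.
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("ATL-Machine/affine-0KB")
model = AutoModelForCausalLM.from_pretrained("ATL-Machine/affine-0KB")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# skip_prompt=True prints only the newly generated tokens, not the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=128, streamer=streamer)
```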
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ATL-Machine/affine-0KB with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ATL-Machine/affine-0KB"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ATL-Machine/affine-0KB",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker

```shell
# Run the OpenAI-compatible vLLM server in Docker:
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model "ATL-Machine/affine-0KB"
```
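With an OpenAI-compatible server running on localhost:8000 (via `vllm serve` or the Docker image above), you can also call it from Python with the `openai` client; a minimal sketch, assuming `pip install openai`:

```python
# Minimal sketch: call the vLLM server's OpenAI-compatible API from Python.
from openai import OpenAI

# vLLM does not check API keys by default, so any placeholder value works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ATL-Machine/affine-0KB",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```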
- SGLang
How to use ATL-Machine/affine-0KB with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "ATL-Machine/affine-0KB" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ATL-Machine/affine-0KB",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
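The same endpoint can be called from Python; a minimal sketch with `requests`, assuming the server above is running on localhost:30000:

```python
# Minimal sketch: call the SGLang server's OpenAI-compatible API with requests.
import requests

response = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "ATL-Machine/affine-0KB",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```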
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "ATL-Machine/affine-0KB" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ATL-Machine/affine-0KB",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use ATL-Machine/affine-0KB with Docker Model Runner:
```shell
docker model run hf.co/ATL-Machine/affine-0KB
```
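The model ships with a ChatML-style Jinja chat template that wraps turns in `<|im_start|>`/`<|im_end|>` markers and supports tool calling via `<tool_call>`/`<tool_response>` tags: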
```jinja
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if message.content is string %}
        {%- set content = message.content %}
    {%- else %}
        {%- set content = '' %}
    {%- endif %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
```
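To inspect the prompt this template produces, you can render it without tokenizing. A minimal sketch; the `get_weather` tool is a made-up example, and passing `tools` to `apply_chat_template` requires a recent transformers release:

```python
# Minimal sketch: render the chat template with a hypothetical tool definition.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ATL-Machine/affine-0KB")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
messages = [{"role": "user", "content": "What is the weather in Paris?"}]

prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
print(prompt)  # shows the <tools> system block and <|im_start|> turn markers
```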