Instructions to use agentica-org/DeepScaleR-1.5B-Preview with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use agentica-org/DeepScaleR-1.5B-Preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="agentica-org/DeepScaleR-1.5B-Preview")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("agentica-org/DeepScaleR-1.5B-Preview")
model = AutoModelForCausalLM.from_pretrained("agentica-org/DeepScaleR-1.5B-Preview")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use agentica-org/DeepScaleR-1.5B-Preview with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "agentica-org/DeepScaleR-1.5B-Preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "agentica-org/DeepScaleR-1.5B-Preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/agentica-org/DeepScaleR-1.5B-Preview
```
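The curl call above can also be issued from Python. A minimal sketch using only the standard library — the endpoint and payload mirror the curl example, and it assumes the vLLM server is already running on port 8000:

```python
import json
import urllib.request

def build_completion_request(base_url: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /v1/completions request for the vLLM server."""
    payload = {
        "model": "agentica-org/DeepScaleR-1.5B-Preview",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("http://localhost:8000", "Once upon a time,")

# With the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

Any OpenAI-compatible client (e.g. the `openai` Python package pointed at `http://localhost:8000/v1`) works the same way.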
- SGLang
How to use agentica-org/DeepScaleR-1.5B-Preview with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "agentica-org/DeepScaleR-1.5B-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "agentica-org/DeepScaleR-1.5B-Preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "agentica-org/DeepScaleR-1.5B-Preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "agentica-org/DeepScaleR-1.5B-Preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use agentica-org/DeepScaleR-1.5B-Preview with Docker Model Runner:
```shell
docker model run hf.co/agentica-org/DeepScaleR-1.5B-Preview
```
I have difficulty triggering the thinking process.
same.
Maybe there is an issue with the config or the conversion to GGUF?
The thinking should be wrapped in `<think>` and `</think>`, let me know if you are able to find these tokens in the LLM generated outputs!
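A quick way to check whether the model is actually emitting its reasoning: split the generated text on those tokens. A minimal sketch, assuming the `<think>`/`</think>` tags used by DeepSeek-R1-style models:

```python
# Check whether generated text contains a <think>...</think> reasoning span.
# Assumes the model wraps its chain of thought in <think> and </think> tags.

def split_thinking(text: str):
    """Return (thinking, answer); thinking is None if no tags are found."""
    start, end = "<think>", "</think>"
    if start in text and end in text:
        thinking = text.split(start, 1)[1].split(end, 1)[0].strip()
        answer = text.split(end, 1)[1].strip()
        return thinking, answer
    return None, text.strip()

sample = "<think>2 + 2 = 4</think> The answer is 4."
print(split_thinking(sample))  # ('2 + 2 = 4', 'The answer is 4.')
```

If `thinking` comes back `None` on every generation, the model (or the chat template / GGUF conversion) is likely not emitting the tags at all.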
Sometimes it would just output the `</think>` token without any thinking.
Maybe it could use more training?
On larger models, I can almost always find a way to trigger it using one of these:
- "Match your effort to the task. And when it gets tough, take as long as you need to think before you start answering."
- Something like "The goal is to find []" or "The goal is to figure out []"
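Applied programmatically, the first suggestion amounts to prepending a nudge to the user prompt before generation. A trivial sketch (the nudge text is quoted from this thread; whether it reliably triggers thinking on this 1.5B model is untested):

```python
# Prepend a "take your time to think" nudge to the user prompt,
# as suggested above for triggering the thinking process.
NUDGE = (
    "Match your effort to the task. And when it gets tough, "
    "take as long as you need to think before you start answering."
)

def with_nudge(question: str) -> str:
    """Return the question prefixed with the thinking nudge."""
    return f"{NUDGE}\n\n{question}"

prompt = with_nudge("What is 17 * 24?")
```

The resulting `prompt` would then be passed to the pipeline or completion endpoint as usual.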
@michaelzhiluo Thanks for your help, but there was a formatting issue with your reply; can you share it again? :)