Instructions for using AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning", dtype="auto"
)
```
- Notebooks
- Google Colab
- Kaggle
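Beyond loading the raw model, a typical workflow is chat-style text generation. The sketch below is a minimal, hedged example, not an official recipe from the model card: it assumes the checkpoint ships a tokenizer with a chat template (as standard DeepSeek-R1 Qwen3 distills do), that `device_map="auto"` is available (requires the `accelerate` package), and the prompt is purely illustrative.

```python
# Minimal generation sketch (assumptions: the repo includes a tokenizer
# with a chat template; `accelerate` is installed for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt; the user message is just an example.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that reasoning-tuned checkpoints like this one may emit an extended chain-of-thought before the final answer, so a generous `max_new_tokens` budget is usually needed.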