Instructions to use Jinx-org/Jinx-Qwen3-4B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Jinx-org/Jinx-Qwen3-4B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Jinx-org/Jinx-Qwen3-4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Jinx-org/Jinx-Qwen3-4B")
model = AutoModelForCausalLM.from_pretrained("Jinx-org/Jinx-Qwen3-4B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
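In the Transformers snippet above, `model.generate` returns the prompt tokens followed by the newly generated ones in a single sequence, which is why the final `decode` call slices at the prompt length. A torch-free sketch of that indexing with plain Python lists (the token IDs here are made up for illustration):

```python
# model.generate() returns [prompt tokens] + [new tokens] as one sequence.
# Slicing at the prompt length keeps only the model's continuation, which
# is what `outputs[0][inputs["input_ids"].shape[-1]:]` does above.
prompt_ids = [9707, 11, 879]         # stands in for inputs["input_ids"][0]
generated = prompt_ids + [525, 498]  # stands in for outputs[0]

new_token_ids = generated[len(prompt_ids):]
print(new_token_ids)  # only the freshly generated IDs
```

Without this slice, `decode` would echo the prompt back along with the answer.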
- Local Apps
- vLLM
How to use Jinx-org/Jinx-Qwen3-4B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Jinx-org/Jinx-Qwen3-4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jinx-org/Jinx-Qwen3-4B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/Jinx-org/Jinx-Qwen3-4B
```
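Once the vLLM server is running, any OpenAI-compatible client can call it. The curl request above translates to stdlib-only Python roughly as follows; this is a sketch that assumes the server from the previous step is listening on localhost:8000, and `build_chat_request` is a hypothetical helper name:

```python
import json
import urllib.request

def build_chat_request(model: str, user_content: str) -> urllib.request.Request:
    """Build a POST request mirroring the curl example above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("Jinx-org/Jinx-Qwen3-4B",
                             "What is the capital of France?")
    # Requires the vLLM server from the step above to be running.
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
```

The same request works against the SGLang server below by changing the port to 30000.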
- SGLang
How to use Jinx-org/Jinx-Qwen3-4B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Jinx-org/Jinx-Qwen3-4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jinx-org/Jinx-Qwen3-4B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Jinx-org/Jinx-Qwen3-4B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jinx-org/Jinx-Qwen3-4B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use Jinx-org/Jinx-Qwen3-4B with Docker Model Runner:
```shell
docker model run hf.co/Jinx-org/Jinx-Qwen3-4B
```
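The vLLM and SGLang servers above both speak the OpenAI-compatible chat-completions schema, so their responses can be parsed the same way. A minimal sketch against a trimmed-down, made-up response body (a real response carries additional fields such as `id` and `usage`):

```python
import json

# A trimmed-down example of the JSON an OpenAI-compatible server returns.
raw = json.dumps({
    "model": "Jinx-org/Jinx-Qwen3-4B",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
            "finish_reason": "stop",
        }
    ],
})

def first_message(body: str) -> str:
    """Pull the assistant's text out of a chat-completions response."""
    return json.loads(body)["choices"][0]["message"]["content"]

print(first_message(raw))
```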
You need to read and agree to the Disclaimer and User Agreement to access this model.
Disclaimer and User Agreement
- Introduction
Thank you for your interest in accessing this model (“the Model”).
Before you access, download, or use the Model or any derivative works, please read and understand this Disclaimer and User Agreement (“Agreement”).
By checking “I have read and agree” and accessing the Model, you acknowledge that you have read, understood, and agreed to all terms of this Agreement.
If you do not agree with any part of this Agreement, do not request or use the Model.
- Nature of the Model & Risk Notice
The Model is trained using large-scale machine learning techniques and may generate inaccurate, false, offensive, violent, sexual, discriminatory, politically sensitive, or otherwise uncontrolled content.
The Model does not guarantee the accuracy, completeness, or legality of any generated content. You must independently evaluate and verify the outputs, and you assume all risks arising from their use.
The Model may reflect biases or errors present in its training data, potentially producing inappropriate or controversial outputs.
- License and Permitted Use
You may use the Model solely for lawful, compliant, and non-malicious purposes in research, learning, experimentation, and development, in accordance with applicable laws and regulations.
You must not use the Model for activities including, but not limited to:
Creating, distributing, or promoting unlawful, violent, pornographic, terrorist, discriminatory, defamatory, or privacy-invasive content;
Any activity that could cause significant negative impact on individuals, groups, organizations, or society;
High-risk applications such as automated decision-making, medical diagnosis, financial transactions, or legal advice without proper validation and human oversight.
You must not remove, alter, or circumvent any safety mechanisms implemented in the Model.
- Data and Privacy
You are solely responsible for any data processed or generated when using the Model, including compliance with data protection and privacy regulations.
The Model’s authors and contributors make no guarantees or warranties regarding data security or privacy.
- Limitation of Liability
To the maximum extent permitted by applicable law, the authors, contributors, and their affiliated institutions shall not be liable for any direct, indirect, incidental, or consequential damages arising from the use of the Model.
You agree to bear full legal responsibility for any disputes, claims, or litigation arising from your use of the Model, and you release the authors and contributors from any related liability.
- Updates and Termination
This Agreement may be updated at any time, with updates posted on the Model’s page and effective immediately upon publication.
If you violate this Agreement, the authors reserve the right to revoke your access to the Model at any time.
I have read and fully understand this Disclaimer and User Agreement, and I accept full responsibility for any consequences arising from my use of the Model.