How to use from SGLang

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "nirusanan/Qwen3-ReactJs-code" \
  --host 0.0.0.0 \
  --port 30000
```

Call the server using curl (OpenAI-compatible API):
```shell
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nirusanan/Qwen3-ReactJs-code",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
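The same OpenAI-compatible endpoint can also be called from Python. Below is a minimal sketch using only the standard library; it assumes the SGLang server above is running on localhost:30000, and the helper names (`build_chat_request`, `ask`) are illustrative, not part of SGLang:

```python
import json
import urllib.request

# Assumes the SGLang server started above is reachable here
SERVER_URL = "http://localhost:30000/v1/chat/completions"

def build_chat_request(model, user_message):
    """Build an OpenAI-compatible chat-completion payload as UTF-8 JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload).encode("utf-8")

def ask(user_message, url=SERVER_URL, model="nirusanan/Qwen3-ReactJs-code"):
    """POST the payload to the server and return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=build_chat_request(model, user_message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
# print(ask("Create a React counter component."))
```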
Model Information
Qwen3-ReactJs-code is a quantized, fine-tuned version of the Qwen3-1.7B-Base model, designed specifically for generating ReactJS code.
- Base model: Qwen/Qwen3-1.7B-Base
How to use
Starting with transformers v4.51.0, you can run conversational inference using the Transformers pipeline.
Make sure your installation is up to date: `pip install --upgrade transformers`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

def get_pipeline():
    model_name = "nirusanan/Qwen3-ReactJs-code"

    # Load the tokenizer and reuse the EOS token for padding
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model in float16 on the first GPU
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)
    return pipe

pipe = get_pipeline()
```
```python
def generate_prompt(project_title, description):
    # Build the instruction-style prompt the model was fine-tuned on
    prompt = f"""Below is an instruction that describes a project. Write Reactjs code to accomplish the project described below.

### Instruction:
Project:
{project_title}

Project Description:
{description}

### Response:
"""
    return prompt

prompt = generate_prompt(project_title="Your ReactJs project", description="Your ReactJs project description")
result = pipe(prompt)
generated_text = result[0]['generated_text']

# The model marks the end of its answer with "### End"; keep only the text before it
print(generated_text.split("### End")[0])
```
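Because the pipeline returns the prompt together with the completion, you may want to keep only the model's answer. A small helper sketch, assuming the `### Response:` and `### End` markers from the prompt format above (the function name `extract_response` is illustrative):

```python
def extract_response(generated_text):
    """Return only the model's answer: the text after '### Response:'
    and before the optional '### End' marker."""
    answer = generated_text.split("### Response:")[-1]
    answer = answer.split("### End")[0]
    return answer.strip()

sample = "### Instruction:\nProject: demo\n### Response:\nconst App = () => <div/>;\n### End"
print(extract_response(sample))  # -> const App = () => <div/>;
```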
Install from pip and serve the model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nirusanan/Qwen3-ReactJs-code" \
  --host 0.0.0.0 \
  --port 30000
```

Then call the server using the same OpenAI-compatible curl request shown above.