---
library_name: transformers
tags:
- code
- NextJS
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
base_model_relation: finetune
pipeline_tag: text-generation
---

# Model Information

Qwen2.5-1.5B-NextJs-code is a quantized, fine-tuned version of the Qwen2.5-1.5B-Instruct model, designed specifically for generating Next.js code.

- **Base model:** Qwen/Qwen2.5-1.5B-Instruct

# How to use

Starting with transformers version 4.44.0, you can run conversational inference with the Transformers pipeline.

Make sure to update your transformers installation via `pip install --upgrade transformers`.
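
As a quick sanity check, you can verify the installed version programmatically. A minimal sketch using the standard `importlib.metadata` module and the `packaging` library (which transformers already depends on):

```python
from importlib.metadata import version
from packaging.version import Version

# Conversational pipeline inference for this model requires transformers >= 4.44.0.
installed = Version(version("transformers"))
assert installed >= Version("4.44.0"), f"transformers {installed} is too old; please upgrade."
```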

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
```

```python
def get_pipeline():
    model_name = "nirusanan/Qwen2.5-1.5B-NextJs-code"

    # Load the tokenizer and reuse the EOS token for padding.
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model in half precision on the first GPU.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)

    return pipe


pipe = get_pipeline()
```
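
If you don't have a single dedicated CUDA GPU, you can let Accelerate place the weights automatically by swapping the `device_map` argument. A hedged variation, not part of the original card (requires the `accelerate` package):

```python
# Let Accelerate choose device placement (GPU if available, otherwise CPU).
# Note: float16 is intended for GPU; consider torch.float32 when running on CPU.
model = AutoModelForCausalLM.from_pretrained(
    "nirusanan/Qwen2.5-1.5B-NextJs-code",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
```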

```python
def generate_prompt(project_title, description):
    # Format the instruction-style prompt the model was fine-tuned on.
    prompt = f"""Below is an instruction that describes a project. Write Nextjs 14 code to accomplish the project described below.

### Instruction:
Project:
{project_title}

Project Description:
{description}

### Response:
"""
    return prompt
```
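
To see exactly what the model receives, you can print a rendered prompt (the title and description below are placeholder examples):

```python
# Placeholder inputs purely for illustration.
print(generate_prompt("Todo App", "A simple todo list with add, delete, and mark-as-done features."))
```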

```python
prompt = generate_prompt(project_title="Your NextJs project", description="Your NextJs project description")
result = pipe(prompt)
generated_text = result[0]['generated_text']

# Drop anything after the "### End" marker.
print(generated_text.split("### End")[0])
```
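
Since the text-generation pipeline returns the prompt together with the completion by default, you may want just the code portion. A minimal sketch for saving it to disk (the output file name and the split logic are assumptions, not part of the original card):

```python
# Keep only the model's completion: drop the prompt prefix and the end marker,
# then write it to an arbitrarily named file for inspection.
completion = generated_text.split("### Response:")[-1].split("### End")[0].strip()
with open("generated_nextjs_code.txt", "w", encoding="utf-8") as f:
    f.write(completion)
```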