Text Generation
Transformers
Safetensors
English
qwen2
code-generation
conversational
text-generation-inference
Instructions for using luzimu/WebGen-LM-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use luzimu/WebGen-LM-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="luzimu/WebGen-LM-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("luzimu/WebGen-LM-7B")
model = AutoModelForCausalLM.from_pretrained("luzimu/WebGen-LM-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
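WebGen-LM targets website generation, so a typical reply contains HTML/CSS/JS source rather than conversational text. Below is a minimal post-processing sketch that continues the second snippet above; the fenced-html assumption, the regex, and the `index.html` path are illustrative choices, not a documented output format.

```python
import re

# Continues the snippet above: decode only the newly generated tokens
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)

# Assumption: the model wraps the page in a fenced html code block;
# fall back to the raw response if it does not.
match = re.search(r"```html\s*(.*?)```", response, re.DOTALL)
html = match.group(1) if match else response

# Illustrative output path so the page can be opened in a browser
with open("index.html", "w", encoding="utf-8") as f:
    f.write(html)
```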
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use luzimu/WebGen-LM-7B with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "luzimu/WebGen-LM-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
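Since the server exposes an OpenAI-compatible API, you can also call it from Python with the official `openai` client. A minimal sketch, assuming the server started above is listening on localhost:8000; vLLM ignores the API key unless one is configured, so the placeholder value is arbitrary.

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="luzimu/WebGen-LM-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```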
Use Docker:

```sh
docker model run hf.co/luzimu/WebGen-LM-7B
```
- SGLang
How to use luzimu/WebGen-LM-7B with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "luzimu/WebGen-LM-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
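The SGLang server is OpenAI-compatible as well, so the same `openai` client works against port 30000. The sketch below additionally streams tokens as they are generated; as above, a local server and a placeholder API key are assumed.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# stream=True returns an iterator of incremental chunks
stream = client.chat.completions.create(
    model="luzimu/WebGen-LM-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```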
Use Docker images:

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "luzimu/WebGen-LM-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use luzimu/WebGen-LM-7B with Docker Model Runner:
```sh
docker model run hf.co/luzimu/WebGen-LM-7B
```
Improve model card: Add code-generation tag and sample usage
#2
opened by nielsr (HF Staff)
README.md CHANGED

````diff
@@ -5,11 +5,13 @@ datasets:
 - luzimu/WebGen-Bench
 language:
 - en
+library_name: transformers
 license: mit
 metrics:
 - accuracy
 pipeline_tag: text-generation
-
+tags:
+- code-generation
 ---
 
 # WebGen-LM
@@ -30,6 +32,40 @@ The WebGen-LM family of models are as follows:
 
 
 
+## Sample Usage
+
+You can use this model with the Hugging Face `transformers` library.
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+model_id = "luzimu/WebGen-LM-7B"  # This model card refers to WebGen-LM-7B
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+# Example for website generation
+user_prompt = "Generate a simple HTML page with a heading 'Hello, World!' and a paragraph of lorem ipsum text."
+messages = [
+    {"role": "user", "content": user_prompt}
+]
+
+# Apply chat template for instruction-following format
+text_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+# Generate output
+model_inputs = tokenizer(text_input, return_tensors="pt").to(model.device)
+generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=500, do_sample=True, temperature=0.01, top_k=50, top_p=0.95)
+
+# Decode and print the generated code
+generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
+print(generated_text)
+
+# Example using Hugging Face pipeline for simpler inference
+generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
+result = generator(user_prompt, max_new_tokens=500, do_sample=True, temperature=0.01, top_k=50, top_p=0.95)
+print(result[0]['generated_text'])
+```
+
 ## Citation
 
````