---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
datasets:
- luzimu/WebGen-Bench
language:
- en
library_name: transformers
license: mit
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code-generation
---

# WebGen-LM

WebGen-LM is a code language model trained specifically to generate interactive and functional websites from scratch. It is trained on Bolt.diy trajectories generated from a subset of the training set of WebGen-Bench (🤗 [luzimu/WebGen-Bench](https://huggingface.co/datasets/luzimu/WebGen-Bench)). It was introduced in the paper [WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch](https://arxiv.org/abs/2505.03733). The training data and code can be found at [WebGen-Bench (Github)](https://github.com/mnluzimu/WebGen-Bench).

The WebGen-LM family of models is as follows:

| Models | HF Links |
|---|---|
| WebGen-LM-7B | 🤗 [luzimu/WebGen-LM-7B](https://huggingface.co/luzimu/WebGen-LM-7B) |
| WebGen-LM-14B | 🤗 [luzimu/WebGen-LM-14B](https://huggingface.co/luzimu/WebGen-LM-14B) |
| WebGen-LM-32B | 🤗 [luzimu/WebGen-LM-32B](https://huggingface.co/luzimu/WebGen-LM-32B) |

## Performance on WebGen-Bench

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b0bfef2f2f9c345b87e673/ADt1JdvKw-IZ_xnS17adL.png)

## Usage

You can use `WebGen-LM` with the `transformers` library to generate website code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "luzimu/WebGen-LM-32B"  # You can also use WebGen-LM-7B or WebGen-LM-14B

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Example prompt for website generation
prompt = """Generate the complete HTML, CSS, and JavaScript code for a responsive website. The website should be a simple landing page for a coffee shop. It needs:
1. A navigation bar at the top with "Home", "Menu", "About Us", and "Contact" links.
2. A hero section with a background image, a title "Brewing Perfection", and a call-to-action button "View Our Menu".
3. A menu section displaying at least 3 coffee items with their names and prices.
4. An "About Us" section with a brief description of the coffee shop.
5. A "Contact" section with an address, phone number, and a simple contact form (Name, Email, Message, Submit button).
6. Basic responsive design for mobile views.
"""

messages = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,  # Adjust as needed to fit the full website code
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.05,
)
# The output contains the prompt tokens followed by the new tokens, so slice
# off the prompt before decoding to keep only the generated website code.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
response = tokenizer.decode(output_ids, skip_special_tokens=True)

print(response)
```

If the model returns the HTML, CSS, and JavaScript in a single combined response, you may need to parse the output into separate files, for example by looking for markers such as language-tagged fenced code blocks or `<style>`/`<script>` tags.
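One way to do that split, shown here as a minimal sketch rather than an official part of the WebGen-Bench tooling: it assumes the model wraps each file in a language-tagged fenced code block and reuses the `response` string from the snippet above; the `extract_code_blocks` helper and the output file names are hypothetical.

```python
import re

def extract_code_blocks(response: str) -> dict[str, list[str]]:
    """Collect fenced code blocks from the model output, keyed by language tag.

    Assumes each block is delimited by triple-backtick fences with an optional
    language tag; untagged blocks are stored under "unknown".
    """
    blocks: dict[str, list[str]] = {}
    for lang, body in re.findall(r"```(\w*)\n(.*?)```", response, flags=re.DOTALL):
        blocks.setdefault(lang.lower() or "unknown", []).append(body.strip())
    return blocks

# Write each recovered file to disk (file names are arbitrary examples).
blocks = extract_code_blocks(response)
for lang, ext in (("html", "html"), ("css", "css"), ("javascript", "js")):
    for i, body in enumerate(blocks.get(lang, [])):
        with open(f"site_{i}.{ext}", "w", encoding="utf-8") as f:
            f.write(body)
```

If the model instead emits raw HTML with inline `<style>` and `<script>` tags, an HTML parser such as BeautifulSoup would be a more reliable way to separate the pieces than regular expressions.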