import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import gradio as gr
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
"Neo111x/Falcon3-3B-Instruct-RL-CODE-RL",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
"Neo111x/Falcon3-3B-Instruct-RL-CODE-RL",
trust_remote_code=True
)
model.eval()
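If a CUDA GPU is available, the same checkpoint can instead be loaded in half precision to roughly halve memory use for this ~3B model. This is an optional variant of the load above, not what the app does by default; `device_map="auto"` assumes the `accelerate` package is installed.

```python
# Optional alternative load (assumption: CUDA GPU with enough memory for a
# ~3B model, and `accelerate` installed for device_map="auto")
model = AutoModelForCausalLM.from_pretrained(
    "Neo111x/Falcon3-3B-Instruct-RL-CODE-RL",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # half precision on GPU
    device_map="auto",          # let accelerate place the weights
)
```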
# Inference function
def repair_code(faulty_code):
    PROGRAM_REPAIR_TEMPLATE = f"""
You are an expert in the field of software testing.
You are given a buggy Python program. You should first generate test cases that expose the bug,
and then generate the corresponding fixed code. The two tasks are detailed as follows.
1. **Generate a comprehensive set of test cases to expose the bug**:
- Each test case should include an input and the expected output.
- Output the test cases as a JSON list, where each entry is a dictionary with keys "test_input" and "test_output".
- Write in ```json ``` block.
2. **Provide a fixed version**:
- Write a correct Python program to fix the bug.
- Write in ```python ``` block.
- The code should read from standard input and write to standard output, matching the input/output format specified in the problem.
Here is an example.
The faulty Python program is:
```python
\"\"\"Please write a Python program to sum two integer inputs\"\"\"
def add (x, y):
return x - y
x = int(input())
y = int(input())
print(add(x,y))
```
Testcases that can expose the bug:
```json
[
{{
\"test_input\": \"1\\n2\",
\"test_output\": \"3\"
}},
{{
\"test_input\": \"-1\\n1\",
\"test_output\": \"0\"
}},
{{
\"test_input\": \"-1\\n2\",
\"test_output\": \"1\"
}}
]
```
Fixed code:
```python
def add (x, y):
return x + y
x = int(input())
y = int(input())
print(add(x,y))
```
Now, you are given a faulty Python function. Please return:
1. **Testcases** that help expose the bug.
2. **Fixed code** that passes all testcases.
The faulty function is:
```python
{faulty_code}
```
"""
    messages = [
        {"role": "user", "content": PROGRAM_REPAIR_TEMPLATE}
    ]
    # add_generation_prompt=True appends the model's own assistant-turn marker,
    # so it does not need to be hard-coded into the prompt text above
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=512,  # may need raising if replies are truncated
            do_sample=False
        )
    # keep only the newly generated tokens, dropping the echoed prompt
    generated_ids = [
        output_ids[len(input_ids):]
        for input_ids, output_ids in zip(inputs["input_ids"], outputs)
    ]
    result = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return result.strip()
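The prompt asks the model to answer with a ```json``` block of test cases and a ```python``` block with the fix, so the raw reply can be parsed mechanically. A hypothetical helper along these lines (not part of the app, which shows the raw text) could split the two blocks out:

```python
import json
import re

FENCE = "`" * 3  # triple backtick, built here so the fences are explicit

def extract_blocks(reply: str):
    """Parse (testcases, fixed_code) from a reply holding fenced blocks."""
    json_match = re.search(FENCE + r"json\s*(.*?)" + FENCE, reply, re.DOTALL)
    code_match = re.search(FENCE + r"python\s*(.*?)" + FENCE, reply, re.DOTALL)
    testcases = json.loads(json_match.group(1)) if json_match else []
    fixed_code = code_match.group(1).strip() if code_match else ""
    return testcases, fixed_code
```

If the model omits either block, the helper degrades to an empty list or empty string rather than raising.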
# Gradio UI
gr.Interface(
fn=repair_code,
inputs=gr.Textbox(label="Faulty Python Function", lines=15, placeholder="Paste your buggy function here..."),
outputs=gr.Textbox(label="Generated Test Cases and Fixed Code", lines=30),
title="🧠 AI Program Repair - Falcon3 3B GRPO",
description="Paste a buggy Python function. The model will generate test cases and a fixed version of the code."
).launch()
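Because the prompt requires fixed code that reads standard input and writes standard output, each generated test case can be checked by actually running the candidate program. A sketch of such a checker (a hypothetical helper, not part of this app) using a subprocess per case:

```python
import subprocess
import sys

def run_testcases(fixed_code: str, testcases: list) -> bool:
    """Return True if fixed_code passes every test case, where each case is a
    dict with "test_input" and "test_output" strings."""
    for case in testcases:
        proc = subprocess.run(
            [sys.executable, "-c", fixed_code],  # run the candidate program
            input=case["test_input"],            # feed the case via stdin
            capture_output=True,
            text=True,
            timeout=5,                           # guard against infinite loops
        )
        if proc.stdout.strip() != case["test_output"].strip():
            return False
    return True
```

Running untrusted model output this way is only safe in a sandboxed environment; a production checker would want stricter isolation than a bare subprocess.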