Runtime error
```python
from gradio import Interface, Textbox, Dropdown

def compare(prompt, model1="biogpt", model2="gemma_2b_en"):
    """
    Fetches outputs from BioGPT and Gemma for a given prompt and model selection.

    Args:
        prompt: User-defined prompt for text generation.
        model1: Name of the first model (default: biogpt).
        model2: Name of the second model (default: gemma_2b_en).

    Returns:
        A list containing BioGPT and Gemma outputs as strings.
    """
    # Replace these with actual BioGPT and Gemma inference code
    biogpt_output = f"BioGPT Output for '{prompt}'"  # Placeholder output
    gemma_output = f"Gemma Output for '{prompt}'"  # Placeholder output
    return [biogpt_output, gemma_output]

# Interface definition
interface = Interface(
    fn=compare,
    inputs=[
        Textbox(label="Enter Prompt:", placeholder="Write your prompt here..."),
        # Dropdowns for model selection (modify options as needed)
        Dropdown(choices=["biogpt", "gpt2", "bart"], value="biogpt", label="Model 1"),
        Dropdown(choices=["gemma_2b_en", "gemma_1b_en"], value="gemma_2b_en", label="Model 2"),
    ],
    # compare returns two strings, so the interface needs two output components
    outputs=[Textbox(label="Model 1 Output"), Textbox(label="Model 2 Output")],
    # description must be a plain string (Gradio renders it as Markdown);
    # passing a Markdown component here causes the runtime error at startup
    description="""
This Gradio app allows you to compare text generation outputs from two different large language models (LLMs) on the same prompt.

* Enter a prompt in the text box.
* Select the desired models from the dropdown menus. BioGPT and Gemma are currently supported.
* Click "Run" to generate outputs from both models.

**Note:** This is a demonstration; actual model inference still needs to be implemented.
""",
)

# Launch the Gradio app (share=True only creates a temporary public tunnel link;
# it is not needed when the app is hosted on Hugging Face Spaces)
interface.launch()
```
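The placeholder strings above stand in for real inference calls. Since `compare` already receives the dropdown selections, one way to wire them up is a small registry mapping each dropdown value to a generator callable, so adding a model means adding one entry rather than editing `compare`. The generator functions below are hypothetical stand-ins; a real entry for `biogpt` might wrap a Hugging Face `transformers` text-generation pipeline, and the `gemma_*` presets a KerasNLP model.

```python
# Hypothetical registry: dropdown value -> callable(prompt) -> str.
# Each _placeholder would be replaced by a function that runs real inference.
def _placeholder(name):
    def generate(prompt):
        return f"{name} output for '{prompt}'"
    return generate

MODEL_REGISTRY = {
    "biogpt": _placeholder("BioGPT"),
    "gpt2": _placeholder("GPT-2"),
    "bart": _placeholder("BART"),
    "gemma_2b_en": _placeholder("Gemma 2B"),
    "gemma_1b_en": _placeholder("Gemma 1B"),
}

def compare(prompt, model1="biogpt", model2="gemma_2b_en"):
    """Run both selected models on the same prompt and return their outputs."""
    return [MODEL_REGISTRY[model1](prompt), MODEL_REGISTRY[model2](prompt)]
```

With this in place, the `Interface` definition stays unchanged: the two dropdown values flow straight through `compare` into the registry lookup.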