from gradio import Interface, Text, Markdown, Dropdown

def compare(prompt, model1="biogpt", model2="gemma_2b_en"):
    """
    Fetches outputs from the two selected models for a given prompt.

    Args:
        prompt: User-defined prompt for text generation.
        model1: Name of the first model (default: biogpt).
        model2: Name of the second model (default: gemma_2b_en).

    Returns:
        A tuple containing the two models' outputs as strings.
    """
    # Replace these with actual BioGPT and Gemma inference code
    output1 = f"{model1} output for '{prompt}'"  # Placeholder output
    output2 = f"{model2} output for '{prompt}'"  # Placeholder output
    return output1, output2
# Interface definition
interface = Interface(
    fn=compare,
    inputs=[
        Text(label="Enter Prompt:", placeholder="Write your prompt here..."),
        # Dropdowns for model selection (modify options as needed)
        Dropdown(choices=["biogpt", "gpt2", "bart"], value="biogpt", label="Model 1"),
        Dropdown(choices=["gemma_2b_en", "gemma_1b_en"], value="gemma_2b_en", label="Model 2"),
    ],
    # One output component per returned string
    outputs=[Text(label="Model 1 Output"), Text(label="Model 2 Output")],
    # description takes a Markdown-formatted string, not a Markdown component
    description="""
This Gradio app allows you to compare text generation outputs from two different large language models (LLMs) on the same prompt.

* Enter a prompt in the text box.
* Select the desired models from the dropdown menus. BioGPT and Gemma are currently supported.
* Click "Submit" to generate outputs from both models.

**Note:** This is a demonstration; actual model inference still needs to be implemented.
""",
)
# Launch the Gradio app
interface.launch(share=True)  # share=True creates a temporary public link; not needed when hosted on Spaces
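The placeholder outputs above could be swapped for real inference, for example via the Hugging Face `transformers` text-generation pipeline. A minimal sketch, assuming the Hub checkpoints `microsoft/biogpt` and `google/gemma-2b` (the name-to-checkpoint mapping is an assumption, not part of the original app, and Gemma is a gated model that requires accepting its license), with lazy per-model caching so each pipeline is built only once:

```python
from functools import lru_cache

# Hypothetical mapping from the app's dropdown names to Hub checkpoints.
# "google/gemma-2b" is gated; its license must be accepted on the Hub first.
MODEL_IDS = {
    "biogpt": "microsoft/biogpt",
    "gemma_2b_en": "google/gemma-2b",
}

@lru_cache(maxsize=2)
def get_generator(name):
    # Import lazily so the module still loads without transformers installed;
    # the pipeline for each model is built once and cached.
    from transformers import pipeline
    return pipeline("text-generation", model=MODEL_IDS[name])

def compare(prompt, model1="biogpt", model2="gemma_2b_en"):
    # Generate from both selected models; tune max_new_tokens as needed.
    out1 = get_generator(model1)(prompt, max_new_tokens=64)[0]["generated_text"]
    out2 = get_generator(model2)(prompt, max_new_tokens=64)[0]["generated_text"]
    return out1, out2
```

This `compare` is a drop-in replacement for the placeholder version above; the first call per model is slow because the checkpoint is downloaded and loaded, and subsequent calls reuse the cached pipeline.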