import gradio as gr
import json
import requests
import openai
def response_print(model_list, response_list):
    """Build a markdown summary noting, for each model, whether a CoT response is available."""
    answer = ""
    for idx in range(len(model_list)):
        # Check each model's own response (the original tested the whole list's truthiness).
        answer += f"# {model_list[idx]}'s response: {'CoT' if response_list[idx] else 'None'}\n"
    return answer
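To illustrate, here is a standalone sketch of how such a summary function behaves when each model's response is checked individually (the model names and responses are illustrative, not from the experiment):

```python
def response_print(model_list, response_list):
    """Build a markdown summary noting, for each model, whether a CoT response is available."""
    answer = ""
    for idx in range(len(model_list)):
        # Label the model "CoT" if it produced a response, otherwise "None".
        answer += f"# {model_list[idx]}'s response: {'CoT' if response_list[idx] else 'None'}\n"
    return answer

# Hypothetical inputs: one model with a response, one without.
summary = response_print(["Llama2", "Alpaca"], ["The answer is 42.", ""])
print(summary)
# → # Llama2's response: CoT
#   # Alpaca's response: None
```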
TITLE = """<h1 align="center">LLM Agora 🗣️🦙</h1>"""
INTRODUCTION_TEXT = """
The **LLM Agora 🗣️🦙** aims to improve the quality of open-source LMs' responses through the debate & revision process introduced in [Improving Factuality and Reasoning in Language Models through Multiagent Debate](https://arxiv.org/abs/2305.14325).

Did you know? 🤔 **LLMs can also improve their responses by debating with other LLMs**! 😮 We applied this concept to several open-source LMs to verify that open-source models, not only proprietary ones, can meaningfully improve their responses through discussion. 🤗

For more details, please refer to the GitHub repository below.

You can use LLM Agora with your own questions if an open-source LM's response is unsatisfactory and you want to improve its quality!
The Math, GSM8K, and MMLU tabs show the results of the experiment; for inference, please use the 'Inference' tab.
Please find more specific information in the [GitHub Repository](https://github.com/gauss5930/LLM-Agora)!
"""
with gr.Blocks() as demo:
    gr.HTML(TITLE)
    gr.Markdown(INTRODUCTION_TEXT)
    with gr.Tab("Inference"):
        model_select = gr.CheckboxGroup(
            ["Llama2", "Alpaca", "Vicuna", "Koala", "Falcon", "Baize", "WizardLM", "Orca", "phi-1.5"],
            label="Model Selection",
            info="Choose 3 LMs to participate in LLM Agora.",
        )
        cot_checkbox = gr.Checkbox(label="CoT", info="Do you want to use CoT for inference?")
    with gr.Tab("Math"):
        # Placeholder: the original referenced an undefined `text` variable here.
        gr.Markdown("Math experiment results.")
    with gr.Tab("GSM8K"):
        # Placeholder for the originally empty tab body.
        gr.Markdown("GSM8K experiment results.")
    with gr.Tab("MMLU"):
        # Placeholder for the originally empty tab body.
        gr.Markdown("MMLU experiment results.")

demo.launch()