## What is Comet?

[Comet](https://www.comet.com?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) is an MLOps platform designed to help data scientists and teams build better models faster! Comet provides tooling to Track, Explain, Manage, and Monitor your models in a single place! It works with Jupyter Notebooks and scripts and, most importantly, it's 100% free!

Source: [Other Tutorials - Gradio And Comet Guide](https://gradio.app/guides/Gradio-and-Comet)
## Setup

First, install the dependencies needed to run these examples:

```shell
pip install comet_ml torch torchvision transformers gradio shap requests Pillow
```

Next, you will need to [sign up for a Comet Account](https://www.comet.com/signup?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs). Once you have your account set up, [grab your API Key](https://www.comet.com/docs/v2/guides/getting-started/quickstart/get-an-api-key?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) and configure your Comet credentials.

If you're running these examples as a script, you can either export your credentials as environment variables:

```shell
export COMET_API_KEY="<Your API Key>"
export COMET_WORKSPACE="<Your Workspace Name>"
export COMET_PROJECT_NAME="<Your Project Name>"
```

or set them in a `.comet.config` file in your working directory. The file should be formatted in the following way:

```shell
[comet]
api_key=<Your API Key>
workspace=<Your Workspace Name>
project_name=<Your Project Name>
```

If you are using the provided Colab Notebooks to run these examples, please run the cell with the following snippet before starting the Gradio UI. Running this cell allows you to interactively add your API key to the notebook.

```python
import comet_ml

comet_ml.init()
```
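As a quick sanity check, the `.comet.config` file shown above is standard INI syntax, so you can confirm that a file you've written parses as expected. This is just an illustrative sketch using Python's stdlib `configparser` (not part of the Comet SDK); the section and key names follow the example above:

```python
import configparser

# Parse the same content that would live in .comet.config
config = configparser.ConfigParser()
config.read_string("""
[comet]
api_key=<Your API Key>
workspace=<Your Workspace Name>
project_name=<Your Project Name>
""")

# The [comet] section should expose all three credentials
print(config["comet"]["workspace"])
```

If any key is missing or the section header is misspelled, `comet_ml` will fall back to environment variables or prompt you for credentials.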
## 1. Logging Gradio UIs to your Comet Experiments

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Gradio_and_Comet.ipynb)

In this example, we will go over how to log your Gradio applications to Comet and interact with them using the Gradio Custom Panel. Let's start by building a simple image classification example using `resnet18`.

```python
import comet_ml

import gradio as gr
import requests
import torch
from PIL import Image
from torchvision import transforms

torch.hub.download_url_to_file("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")

if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

model = torch.hub.load("pytorch/vision:v0.6.0", "resnet18", pretrained=True).eval()
model = model.to(device)

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")


def predict(inp):
    inp = Image.fromarray(inp.astype("uint8"), "RGB")
    inp = transforms.ToTensor()(inp).unsqueeze(0)

    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp.to(device))[0], dim=0)

    return {labels[i]: float(prediction[i]) for i in range(1000)}


inputs = gr.Image()
outputs = gr.Label(num_top_classes=3)

io = gr.Interface(
    fn=predict, inputs=inputs, outputs=outputs, examples=["dog.jpg"]
)
io.launch(inline=False, share=True)

experiment = comet_ml.Experiment()
experiment.add_tag("image-classifier")

io.integrate(comet_ml=experiment)
```

The last line in this snippet will log the URL of the Gradio application to your Comet Experiment. You can find the URL in the Text tab of your Experiment.

<video width="560" height="315" controls>
    <source src="https://user-images.githubusercontent.com/7529846/214328034-09369d4d-8b94-4c4a-aa3c-25e3ed8394c4.mp4"></source>
</video>

Add the Gradio Panel to your Experiment to interact with your application.

<video width="560" height="315" controls>
    <source src="https://user-images.githubusercontent.com/7529846/214328194-95987f83-c180-4929-9bed-c8a0d3563ed7.mp4"></source>
</video>
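The `predict` function above returns a label-to-probability dictionary produced by a softmax over the model's logits, which is what `gr.Label` uses to show the top classes. As a quick illustration of that final step, here is the softmax computation in pure Python, independent of the model (the example logits and labels are made up):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three hypothetical classes
logits = [2.0, 1.0, 0.1]
labels = ["dog", "cat", "fish"]

probs = softmax(logits)
# Same shape of output as predict(): {label: probability}
prediction = {labels[i]: probs[i] for i in range(len(labels))}
```

`gr.Label(num_top_classes=3)` then sorts this dictionary and displays the three highest-probability entries.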
## 2. Embedding Gradio Applications directly into your Comet Projects

<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=9" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

If you are permanently hosting your Gradio application, you can embed the UI using the Gradio Panel Extended custom Panel.

Go to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page.

<img width="560" alt="adding-panels" src="https://user-images.githubusercontent.com/7529846/214329314-70a3ff3d-27fb-408c-a4d1-4b58892a3854.jpeg">

Next, search for Gradio Panel Extended in the Public Panels section and click `Add`.

<img width="560" alt="gradio-panel-extended" src="https://user-images.githubusercontent.com/7529846/214325577-43226119-0292-46be-a62a-0c7a80646ebb.png">

Once you have added your Panel, click `Edit` to access the Panel Options page and paste in the URL of your Gradio application.

![Edit-Gradio-Panel-Options](https://user-images.githubusercontent.com/7529846/214573001-23814b5a-ca65-4ace-a8a5-b27cdda70f7a.gif)

<img width="560" alt="Edit-Gradio-Panel-URL" src="https://user-images.githubusercontent.com/7529846/214334843-870fe726-0aa1-4b21-bbc6-0c48f56c48d8.png">
## 3. Embedding Hugging Face Spaces directly into your Comet Projects

<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=107" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

You can also embed Gradio applications that are hosted on Hugging Face Spaces into your Comet Projects using the Hugging Face Spaces Panel.

Go to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page. Next, search for the Hugging Face Spaces Panel in the Public Panels section and click `Add`.

<img width="560" height="315" alt="huggingface-spaces-panel" src="https://user-images.githubusercontent.com/7529846/214325606-99aa3af3-b284-4026-b423-d3d238797e12.png">

Once you have added your Panel, click `Edit` to access the Panel Options page and paste in the path of your Hugging Face Space, e.g. `pytorch/ResNet`.

<img width="560" height="315" alt="Edit-HF-Space" src="https://user-images.githubusercontent.com/7529846/214335868-c6f25dee-13db-4388-bcf5-65194f850b02.png">
## 4. Logging Model Inferences to Comet

<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=176" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Logging_Model_Inferences_with_Comet_and_Gradio.ipynb)

In the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.

In the following snippet, we're going to log inferences from a text generation model. We can persist an Experiment across multiple inference calls using Gradio's [State](https://www.gradio.app/docs/state) object. This will allow you to log multiple inferences from a model to a single Experiment.

```python
import comet_ml

import gradio as gr
import shap
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

MODEL_NAME = "gpt2"

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# set model decoder to true
model.config.is_decoder = True
# set text-generation params under task_specific_params
model.config.task_specific_params["text-generation"] = {
    "do_sample": True,
    "max_length": 50,
    "temperature": 0.7,
    "top_k": 50,
    "no_repeat_ngram_size": 2,
}
model = model.to(device)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

explainer = shap.Explainer(model, tokenizer)


def start_experiment():
    """Returns an APIExperiment object that is thread safe
    and can be used to log inferences to a single Experiment
    """
    try:
        api = comet_ml.API()
        workspace = api.get_default_workspace()
        project_name = comet_ml.config.get_config()["comet.project_name"]

        experiment = comet_ml.APIExperiment(
            workspace=workspace, project_name=project_name
        )
        experiment.log_other("Created from", "gradio-inference")

        message = f"Started Experiment: [{experiment.name}]({experiment.url})"

        return (experiment, message)

    except Exception as e:
        return None, None


def predict(text, state, message):
    experiment = state

    shap_values = explainer([text])
    plot = shap.plots.text(shap_values, display=False)

    if experiment is not None:
        experiment.log_other("message", message)
        experiment.log_html(plot)

    return plot


with gr.Blocks() as demo:
    start_experiment_btn = gr.Button("Start New Experiment")
    experiment_status = gr.Markdown()

    # Log a message to the Experiment to provide more context
    experiment_message = gr.Textbox(label="Experiment Message")
    experiment = gr.State()

    input_text = gr.Textbox(label="Input Text", lines=5, interactive=True)
    submit_btn = gr.Button("Submit")

    output = gr.HTML(interactive=True)

    start_experiment_btn.click(
        start_experiment, outputs=[experiment, experiment_status]
    )
    submit_btn.click(
        predict, inputs=[input_text, experiment, experiment_message], outputs=[output]
    )
```

Inferences from this snippet will be saved in the HTML tab of your experiment.

<video width="560" height="315" controls>
    <source src="https://user-images.githubusercontent.com/7529846/214328610-466e5c81-4814-49b9-887c-065aca14dd30.mp4"></source>
</video>
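The key pattern in the snippet above is that `gr.State` threads one object (the Comet `APIExperiment`) through repeated `predict` calls. Stripped of Gradio and Comet, the control flow looks like the sketch below, where `FakeExperiment` is a stand-in class we introduce purely for illustration (it is not a Comet or Gradio API):

```python
class FakeExperiment:
    """Stand-in for comet_ml.APIExperiment, used only to show the state flow."""

    def __init__(self):
        self.logged = []

    def log_html(self, html):
        self.logged.append(html)


def start_experiment():
    # Corresponds to the "Start New Experiment" button click
    return FakeExperiment(), "Started Experiment"


def predict(text, state):
    experiment = state  # the object carried by gr.State
    plot = f"<div>{text}</div>"  # stands in for the SHAP HTML plot
    if experiment is not None:
        experiment.log_html(plot)
    return plot


exp, status = start_experiment()
# Each submit logs to the same experiment because the state is reused
predict("hello", exp)
predict("world", exp)
```

Because the same state object is passed back into every event handler, all inferences accumulate on a single Experiment until the user starts a new one.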
## Conclusion

We hope you found this guide useful and that it provides some inspiration to help you build awesome model evaluation workflows with Comet and Gradio.
## How to contribute Gradio demos on HF Spaces on the Comet organization

- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add your Gradio demo under your username; see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up a Gradio demo on Hugging Face.
- Request to join the Comet organization [here](https://huggingface.co/Comet).
## Additional Resources

- [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)
## App-level Changes

Source: [Other Tutorials - Gradio 6 Migration Guide](https://gradio.app/guides/gradio-6-migration-guide)

### App-level parameters have been moved from `Blocks` to `launch()`

The `gr.Blocks` class constructor previously contained several parameters that applied to your entire Gradio app, specifically:

* `theme`: The theme for your Gradio app
* `css`: Custom CSS code as a string
* `css_paths`: Paths to custom CSS files
* `js`: Custom JavaScript code
* `head`: Custom HTML code to insert in the head of the page
* `head_paths`: Paths to custom HTML files to insert in the head

Since `gr.Blocks` can be nested and are not necessarily unique to a Gradio app, these parameters have now been moved to `Blocks.launch()`, which can only be called once for your entire Gradio app.

**Before (Gradio 5.x):**

```python
import gradio as gr

with gr.Blocks(
    theme=gr.themes.Soft(),
    css=".my-class { color: red; }",
) as demo:
    gr.Textbox(label="Input")

demo.launch()
```

**After (Gradio 6.x):**

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Textbox(label="Input")

demo.launch(
    theme=gr.themes.Soft(),
    css=".my-class { color: red; }",
)
```

This change makes it clearer that these parameters apply to the entire app and not to individual `Blocks` instances.

### `show_api` parameter replaced with `footer_links`

The `show_api` parameter in `launch()` has been replaced with a more flexible `footer_links` parameter that allows you to control which links appear in the footer of your Gradio app.

**In Gradio 5.x:**

- `show_api=True` (default) showed the API documentation link in the footer
- `show_api=False` hid the API documentation link

**In Gradio 6.x:**

- `footer_links` accepts a list of strings: `["api", "gradio", "settings"]`
- You can now control precisely which footer links are shown:
  - `"api"`: Shows the API documentation link
  - `"gradio"`: Shows the "Built with Gradio" link
  - `"settings"`: Shows the settings link

**Before (Gradio 5.x):**

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Textbox(label="Input")

demo.launch(show_api=False)
```

**After (Gradio 6.x):**

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Textbox(label="Input")

demo.launch(footer_links=["gradio", "settings"])
```

To replicate the old behavior:

- `show_api=True` → `footer_links=["api", "gradio", "settings"]` (or just omit the parameter, as this is the default)
- `show_api=False` → `footer_links=["gradio", "settings"]`

### Event listener parameters: `show_api` removed and `api_name=False` no longer supported

In event listeners (such as `.click()`, `.change()`, etc.), the `show_api` parameter has been removed, and `api_name` no longer accepts `False` as a valid value. These have been replaced with a new `api_visibility` parameter that provides more fine-grained control.

**In Gradio 5.x:**

- `show_api=True` (default) showed the endpoint in the API documentation
- `show_api=False` hid the endpoint from API docs but still allowed downstream apps to use it
- `api_name=False` completely disabled the API endpoint (no downstream apps could use it)

**In Gradio 6.x:**

- `api_visibility` accepts one of three string values:
  - `"public"`: The endpoint is shown in API docs and accessible to all (equivalent to old `show_api=True`)
  - `"undocumented"`: The endpoint is hidden from API docs but still accessible to downstream apps (equivalent to old `show_api=False`)
  - `"private"`: The endpoint is completely disabled and inaccessible (equivalent to old `api_name=False`)

**Before (Gradio 5.x):**

```python
import gradio as gr

with gr.Blocks() as demo:
    btn = gr.Button("Click me")
    output = gr.Textbox()
    btn.click(fn=lambda: "Hello", outputs=output, show_api=False)

demo.launch()
```

Or to completely disable the API:

```python
btn.click(fn=lambda: "Hello", outputs=output, api_name=False)
```

**After (Gradio 6.x):**

```python
import gradio as gr

with gr.Blocks() as demo:
    btn = gr.Button("Click me")
    output = gr.Textbox()
    btn.click(fn=lambda: "Hello", outputs=output, api_visibility="undocumented")

demo.launch()
```

Or to completely disable the API:

```python
btn.click(fn=lambda: "Hello", outputs=output, api_visibility="private")
```

To replicate the old behavior:

- `show_api=True` → `api_visibility="public"` (or just omit the parameter, as this is the default)
- `show_api=False` → `api_visibility="undocumented"`
- `api_name=False` → `api_visibility="private"`

### `like_user_message` moved from `.like()` event to constructor

The `like_user_message` parameter has been moved from the `.like()` event listener to the `Chatbot` constructor.

**Before (Gradio 5.x):**

```python
chatbot = gr.Chatbot()
chatbot.like(print_like_dislike, None, None, like_user_message=True)
```

**After (Gradio 6.x):**

```python
chatbot = gr.Chatbot(like_user_message=True)
chatbot.like(print_like_dislike, None, None)
```

### Default API names for `Interface` and `ChatInterface` now use function names

The default API endpoint names for `gr.Interface` and `gr.ChatInterface` have changed to be consistent with how `gr.Blocks` events work and to better support MCP (Model Context Protocol) tools.

**In Gradio 5.x:**

- `gr.Interface` had a default API name of `/predict`
- `gr.ChatInterface` had a default API name of `/chat`

**In Gradio 6.x:**

- Both `gr.Interface` and `gr.ChatInterface` now use the name of the function you pass in as the default API endpoint name
- This makes the API more descriptive and consistent with `gr.Blocks` behavior

E.g. if your Gradio app is:

```python
import gradio as gr

def generate_text(prompt):
    return f"Generated: {prompt}"

demo = gr.Interface(fn=generate_text, inputs="text", outputs="text")
demo.launch()
```

Previously, the API endpoint that Gradio generated would be `/predict`. Now, the API endpoint will be `/generate_text`.

**To maintain the old endpoint names:**

If you need to keep the old endpoint names for backward compatibility (e.g., if you have external services calling these endpoints), you can explicitly set the `api_name` parameter:

```python
demo = gr.Interface(fn=generate_text, inputs="text", outputs="text", api_name="predict")
```

Similarly for `ChatInterface`:

```python
demo = gr.ChatInterface(fn=chat_function, api_name="chat")
```

### `gr.Chatbot` and `gr.ChatInterface` tuple format removed

The tuple format for chatbot messages has been removed in Gradio 6.0. You must now use the messages format with dictionaries containing "role" and "content" keys.

**In Gradio 5.x:**

- You could use `type="tuples"` or the default tuple format: `[["user message", "assistant message"], ...]`
- The tuple format was a list of lists where each inner list had two elements: `[user_message, assistant_message]`

**In Gradio 6.x:**

- Only the messages format is supported: `type="messages"`
- Messages must be dictionaries with "role" and "content" keys: `[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi there!"}]`

**Before (Gradio 5.x):**

```python
import gradio as gr

# Using tuple format
chatbot = gr.Chatbot(value=[["Hello", "Hi there!"]])
```

Or with `type="tuples"`:

```python
chatbot = gr.Chatbot(value=[["Hello", "Hi there!"]], type="tuples")
```

**After (Gradio 6.x):**

```python
import gradio as gr

# Must use messages format
chatbot = gr.Chatbot(
    value=[
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi there!"}
    ],
    type="messages"
)
```

Similarly for `gr.ChatInterface`, if you were manually setting the chat history:

```python
# Before (Gradio 5.x)
demo = gr.ChatInterface(
    fn=chat_function,
    examples=[["Hello", "Hi there!"]]
)

# After (Gradio 6.x)
demo = gr.ChatInterface(
    fn=chat_function,
    examples=[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi there!"}]
)
```

**Note:** If you're using `gr.ChatInterface` with a function that returns messages, the function should return messages in the new format. The tuple format is no longer supported.

### `gr.ChatInterface` `history` format now uses structured content

The `history` format in `gr.ChatInterface` has been updated to consistently use OpenAI-style structured content format. Content is now always a list of content blocks, even for simple text messages.

**In Gradio 5.x:**

- Content could be a simple string: `{"role": "user", "content": "Hello"}`
- Simple text messages used a string directly

**In Gradio 6.x:**

- Content is always a list of content blocks: `{"role": "user", "content": [{"type": "text", "text": "Hello"}]}`
- This format is consistent with OpenAI's message format and supports multimodal content (text, images, etc.)

**Before (Gradio 5.x):**

```python
history = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris"}
]
```

**After (Gradio 6.x):**

```python
history = [
    {"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Paris"}]}
]
```

**With files:**

When files are uploaded in the chat, they are represented as content blocks with `"type": "file"`. All content blocks (files and text) are grouped together in the same message's content array:

```python
history = [
    {
        "role": "user",
        "content": [
            {"type": "file", "file": {"path": "cat1.png"}},
            {"type": "file", "file": {"path": "cat2.png"}},
            {"type": "text", "text": "What's the difference between these two images?"}
        ]
    }
]
```

This structured format allows for multimodal content (text, images, files, etc.) in chat messages, making it consistent with OpenAI's API format. All files uploaded in a single message are grouped together in the `content` array along with any text content.

### `cache_examples` parameter updated and `cache_mode` introduced

The `cache_examples` parameter (used in `Interface`, `ChatInterface`, and `Examples`) no longer accepts the string value `"lazy"`. It now strictly accepts boolean values (`True` or `False`). To control the caching strategy, a new `cache_mode` parameter has been introduced.

**In Gradio 5.x:**

- `cache_examples` accepted `True`, `False`, or `"lazy"`.

**In Gradio 6.x:**

- `cache_examples` only accepts `True` or `False`.
- `cache_mode` accepts `"eager"` (default) or `"lazy"`.

**Before (Gradio 5.x):**

```python
import gradio as gr

demo = gr.Interface(
    fn=predict,
    inputs="text",
    outputs="text",
    examples=["Hello", "World"],
    cache_examples="lazy"
)
```

**After (Gradio 6.x):**

You must now set `cache_examples=True` and specify the mode separately:

```python
import gradio as gr

demo = gr.Interface(
    fn=predict,
    inputs="text",
    outputs="text",
    examples=["Hello", "World"],
    cache_examples=True,
    cache_mode="lazy"
)
```

If you previously used `cache_examples=True` (which implied eager caching), no changes are required, as `cache_mode` defaults to `"eager"`.
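If you have stored 5.x-style chat histories (plain-string content), upgrading them to the structured content-block format described above is mechanical. The helper below is a sketch; `to_structured` is a name introduced here for illustration, not a Gradio API:

```python
def to_structured(history):
    """Convert 5.x-style {'content': str} messages to 6.x content blocks."""
    out = []
    for msg in history:
        content = msg["content"]
        if isinstance(content, str):
            # Wrap a bare string in a single text content block
            content = [{"type": "text", "text": content}]
        out.append({"role": msg["role"], "content": content})
    return out


old_history = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris"},
]
new_history = to_structured(old_history)
```

Messages whose content is already a list of blocks (e.g. file uploads) pass through unchanged, so the helper can be applied to mixed histories.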
## Component-level Changes

### `gr.Video` no longer accepts tuple values for video and subtitles

The tuple format for returning video with subtitles has been deprecated. Instead of returning a tuple `(video_path, subtitle_path)`, you should now use the `gr.Video` component directly with the `subtitles` parameter.

**In Gradio 5.x:**

- You could return a tuple of `(video_path, subtitle_path)` from a function
- The tuple format was `(str | Path, str | Path | None)`

**In Gradio 6.x:**

- Return a `gr.Video` component instance with the `subtitles` parameter
- This provides more flexibility and consistency with other components

**Before (Gradio 5.x):**

```python
import gradio as gr

def generate_video_with_subtitles(input):
    video_path = "output.mp4"
    subtitle_path = "subtitles.srt"
    return (video_path, subtitle_path)

demo = gr.Interface(
    fn=generate_video_with_subtitles,
    inputs="text",
    outputs=gr.Video()
)
demo.launch()
```

**After (Gradio 6.x):**

```python
import gradio as gr

def generate_video_with_subtitles(input):
    video_path = "output.mp4"
    subtitle_path = "subtitles.srt"
    return gr.Video(value=video_path, subtitles=subtitle_path)

demo = gr.Interface(
    fn=generate_video_with_subtitles,
    inputs="text",
    outputs=gr.Video()
)
demo.launch()
```

### `gr.HTML` `padding` parameter default changed to `False`

The default value of the `padding` parameter in `gr.HTML` has been changed from `True` to `False` for consistency with `gr.Markdown`.

**In Gradio 5.x:**

- `padding=True` was the default for `gr.HTML`
- HTML components had padding by default

**In Gradio 6.x:**

- `padding=False` is the default for `gr.HTML`
- This matches the default behavior of `gr.Markdown` for consistency

**To maintain the old behavior:**

If you want to keep the padding that was present in Gradio 5.x, explicitly set `padding=True`:

```python
html = gr.HTML("<div>Content</div>", padding=True)
```

### `gr.Dataframe` `row_count` and `col_count` parameters restructured

The `row_count` and `col_count` parameters in `gr.Dataframe` have been restructured to provide more flexibility and clarity. The tuple format for specifying fixed/dynamic behavior has been replaced with separate parameters for initial counts and limits.

**In Gradio 5.x:**

- `row_count: int | tuple[int, str]` - Could be an int or tuple like `(5, "fixed")` or `(5, "dynamic")`
- `col_count: int | tuple[int, str] | None` - Could be an int or tuple like `(3, "fixed")` or `(3, "dynamic")`

**In Gradio 6.x:**

- `row_count: int | None` - Just the initial number of rows to display
- `row_limits: tuple[int | None, int | None] | None` - Tuple specifying (min_rows, max_rows) constraints
- `column_count: int | None` - The initial number of columns to display
- `column_limits: tuple[int | None, int | None] | None` - Tuple specifying (min_columns, max_columns) constraints

**Before (Gradio 5.x):**

```python
import gradio as gr

# Fixed number of rows (users can't add/remove rows)
df = gr.Dataframe(row_count=(5, "fixed"), col_count=(3, "dynamic"))
```

Or with dynamic rows:

```python
# Dynamic rows (users can add/remove rows)
df = gr.Dataframe(row_count=(5, "dynamic"), col_count=(3, "fixed"))
```

Or with just integers (defaults to dynamic):

```python
df = gr.Dataframe(row_count=5, col_count=3)
```

**After (Gradio 6.x):**

```python
import gradio as gr

# Fixed number of rows (users can't add/remove rows)
df = gr.Dataframe(row_count=5, row_limits=(5, 5), column_count=3, column_limits=None)
```

Or with dynamic rows (users can add/remove rows):

```python
# Dynamic rows with no limits
df = gr.Dataframe(row_count=5, row_limits=None, column_count=3, column_limits=None)
```

Or with min/max constraints:

```python
# Rows between 3 and 10, columns between 2 and 5
df = gr.Dataframe(row_count=5, row_limits=(3, 10), column_count=3, column_limits=(2, 5))
```

**Migration examples:**

- `row_count=(5, "fixed")` → `row_count=5, row_limits=(5, 5)`
- `row_count=(5, "dynamic")` → `row_count=5, row_limits=None`
- `row_count=5` → `row_count=5, row_limits=None` (same behavior)
- `col_count=(3, "fixed")` → `column_count=3, column_limits=(3, 3)`
- `col_count=(3, "dynamic")` → `column_count=3, column_limits=None`
- `col_count=3` → `column_count=3, column_limits=None` (same behavior)

### `allow_tags=True` is now the default for `gr.Chatbot`

Due to the rise in LLMs returning HTML, markdown tags, and custom tags (such as `<thinking>` tags), the default value of `allow_tags` in `gr.Chatbot` has changed from `False` to `True` in Gradio 6.

**In Gradio 5.x:**

- `allow_tags=False` was the default
- All HTML and custom tags were sanitized/removed from chatbot messages (unless explicitly allowed)

**In Gradio 6.x:**

- `allow_tags=True` is the default
- All custom tags (non-standard HTML tags) are preserved in chatbot messages
- Standard HTML tags are still sanitized for security unless `sanitize_html=False`

**Before (Gradio 5.x):**

```python
import gradio as gr

chatbot = gr.Chatbot()
```

This would remove all tags from messages, including custom tags like `<thinking>`.

**After (Gradio 6.x):**

```python
import gradio as gr

chatbot = gr.Chatbot()
```

This will now preserve custom tags like `<thinking>` in the messages.

**To maintain the old behavior:**

If you want to continue removing all tags from chatbot messages (the old default behavior), explicitly set `allow_tags=False`:

```python
import gradio as gr

chatbot = gr.Chatbot(allow_tags=False)
```

**Note:** You can also specify a list of specific tags to allow:

```python
chatbot = gr.Chatbot(allow_tags=["thinking", "tool_call"])
```

This will only preserve `<thinking>` and `<tool_call>` tags while removing all other custom tags.

### Other removed component parameters

Several component parameters have been removed in Gradio 6.0. These parameters were previously deprecated and have now been fully removed.

#### `gr.Chatbot` removed parameters

**`bubble_full_width`** - This parameter has been removed as it no longer has any effect.

**`resizeable`** - This parameter (with the typo) has been removed. Use `resizable` instead.

**Before (Gradio 5.x):**

```python
chatbot = gr.Chatbot(resizeable=True)
```

**After (Gradio 6.x):**

```python
chatbot = gr.Chatbot(resizable=True)
```

**`show_copy_button`, `show_copy_all_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.

**Before (Gradio 5.x):**

```python
chatbot = gr.Chatbot(show_copy_button=True, show_copy_all_button=True, show_share_button=True)
```

**After (Gradio 6.x):**

```python
chatbot = gr.Chatbot(buttons=["copy", "copy_all", "share"])
```

#### `gr.Audio` / `WaveformOptions` removed parameters

**`show_controls`** - This parameter in `WaveformOptions` has been removed. Use `show_recording_waveform` instead.

**Before (Gradio 5.x):**

```python
audio = gr.Audio(
    waveform_options=gr.WaveformOptions(show_controls=False)
)
```

**After (Gradio 6.x):**

```python
audio = gr.Audio(
    waveform_options=gr.WaveformOptions(show_recording_waveform=False)
)
```

**`min_length` and `max_length`** - These parameters have been removed. Use validators on event listeners instead.

**Before (Gradio 5.x):**

```python
audio = gr.Audio(min_length=1, max_length=10)
```

**After (Gradio 6.x):**

```python
audio = gr.Audio()
audio.upload(
    fn=process_audio,
    validator=lambda audio: gr.validators.is_audio_correct_length(audio, min_length=1, max_length=10),
    inputs=audio
)
```

**`show_download_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.

**Before (Gradio 5.x):**

```python
audio = gr.Audio(show_download_button=True, show_share_button=True)
```

**After (Gradio 6.x):**

```python
audio = gr.Audio(buttons=["download", "share"])
```

**Note:** For components where `show_share_button` had a default of `None` (which would show the button on Spaces), you can use `buttons=["share"]` to always show it, or omit it from the list to hide it.

#### `gr.Image` removed parameters

**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.

**Before (Gradio 5.x):**

```python
image = gr.Image(mirror_webcam=True)
```

**After (Gradio 6.x):**

```python
image = gr.Image(webcam_options=gr.WebcamOptions(mirror=True))
```

**`webcam_constraints`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.

**Before (Gradio 5.x):**

```python
image = gr.Image(webcam_constraints={"facingMode": "user"})
```

**After (Gradio 6.x):**

```python
image = gr.Image(webcam_options=gr.WebcamOptions(constraints={"facingMode": "user"}))
```

**`show_download_button`, `show_share_button`, `show_fullscreen_button`** - These parameters have been removed. Use the `buttons` parameter instead.

**Before (Gradio 5.x):**

```python
image = gr.Image(show_download_button=True, show_share_button=True, show_fullscreen_button=True)
```

**After (Gradio 6.x):**

```python
image = gr.Image(buttons=["download", "share", "fullscreen"])
```

#### `gr.Video` removed parameters

**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.
Component-level Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
``python image = gr.Image(buttons=["download", "share", "fullscreen"]) ``` `gr.Video` removed parameters **`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead. **Before (Gradio 5.x):** ```python video = gr.Video(mirror_webcam=True) ``` **After (Gradio 6.x):** ```python video = gr.Video(webcam_options=gr.WebcamOptions(mirror=True)) ``` **`webcam_constraints`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead. **Before (Gradio 5.x):** ```python video = gr.Video(webcam_constraints={"facingMode": "user"}) ``` **After (Gradio 6.x):** ```python video = gr.Video(webcam_options=gr.WebcamOptions(constraints={"facingMode": "user"})) ``` **`min_length` and `max_length`** - These parameters have been removed. Use validators on event listeners instead. **Before (Gradio 5.x):** ```python video = gr.Video(min_length=1, max_length=10) ``` **After (Gradio 6.x):** ```python video = gr.Video() video.upload( fn=process_video, validator=lambda video: gr.validators.is_video_correct_length(video, min_length=1, max_length=10), inputs=video ) ``` **`show_download_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python video = gr.Video(show_download_button=True, show_share_button=True) ``` **After (Gradio 6.x):** ```python video = gr.Video(buttons=["download", "share"]) ``` `gr.ImageEditor` removed parameters **`crop_size`** - This parameter has been removed. Use `canvas_size` instead. **Before (Gradio 5.x):** ```python editor = gr.ImageEditor(crop_size=(512, 512)) ``` **After (Gradio 6.x):** ```python editor = gr.ImageEditor(canvas_size=(512, 512)) ``` Removed components **`gr.LogoutButton`** - This component has been removed. Use `gr.LoginButton` instead, which handles both login and logout processes. **Before (Gradio 5.x):** ```python logout_btn = gr.LogoutButton() `
Component-level Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
`gr.LogoutButton`** - This component has been removed. Use `gr.LoginButton` instead, which handles both login and logout processes. **Before (Gradio 5.x):** ```python logout_btn = gr.LogoutButton() ``` **After (Gradio 6.x):** ```python login_btn = gr.LoginButton() ``` Native plot components removed parameters The following parameters have been removed from `gr.LinePlot`, `gr.BarPlot`, and `gr.ScatterPlot`: - `overlay_point` - This parameter has been removed. - `width` - This parameter has been removed. Use CSS styling or container width instead. - `stroke_dash` - This parameter has been removed. - `interactive` - This parameter has been removed. - `show_actions_button` - This parameter has been removed. - `color_legend_title` - This parameter has been removed. Use `color_title` instead. - `show_fullscreen_button`, `show_export_button` - These parameters have been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python plot = gr.LinePlot( value=data, x="date", y="downloads", overlay_point=True, width=900, show_fullscreen_button=True, show_export_button=True ) ``` **After (Gradio 6.x):** ```python plot = gr.LinePlot( value=data, x="date", y="downloads", buttons=["fullscreen", "export"] ) ``` **Note:** For `color_legend_title`, use `color_title` instead: **Before (Gradio 5.x):** ```python plot = gr.ScatterPlot(color_legend_title="Category") ``` **After (Gradio 6.x):** ```python plot = gr.ScatterPlot(color_title="Category") ``` `gr.Textbox` removed parameters **`show_copy_button`** - This parameter has been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python text = gr.Textbox(show_copy_button=True) ``` **After (Gradio 6.x):** ```python text = gr.Textbox(buttons=["copy"]) ``` `gr.Markdown` removed parameters **`show_copy_button`** - This parameter has been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python markdow
Component-level Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
buttons=["copy"]) ``` `gr.Markdown` removed parameters **`show_copy_button`** - This parameter has been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python markdown = gr.Markdown(show_copy_button=True) ``` **After (Gradio 6.x):** ```python markdown = gr.Markdown(buttons=["copy"]) ``` `gr.Dataframe` removed parameters **`show_copy_button`, `show_fullscreen_button`** - These parameters have been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python df = gr.Dataframe(show_copy_button=True, show_fullscreen_button=True) ``` **After (Gradio 6.x):** ```python df = gr.Dataframe(buttons=["copy", "fullscreen"]) ``` `gr.Slider` removed parameters **`show_reset_button`** - This parameter has been removed. Use the `buttons` parameter instead. **Before (Gradio 5.x):** ```python slider = gr.Slider(show_reset_button=True) ``` **After (Gradio 6.x):** ```python slider = gr.Slider(buttons=["reset"]) ```
Component-level Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
`gradio sketch` command removed The `gradio sketch` command-line tool has been removed in Gradio 6. This tool was used to create Gradio apps through a visual interface. **In Gradio 5.x:** - You could run `gradio sketch` to launch an interactive GUI for building Gradio apps - The tool would generate Python code visually **In Gradio 6.x:** - The `gradio sketch` command has been removed - Running `gradio sketch` will raise a `DeprecationWarning`
CLI Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
`hf_token` parameter renamed to `token` in `Client`

The `hf_token` parameter in the `Client` class has been renamed to `token` for consistency and simplicity.

**Before (Gradio 5.x):**
```python
from gradio_client import Client

client = Client("abidlabs/my-private-space", hf_token="hf_...")
```

**After (Gradio 6.x):**
```python
from gradio_client import Client

client = Client("abidlabs/my-private-space", token="hf_...")
```

`deploy_discord` method deprecated

The `deploy_discord` method in the `Client` class has been deprecated and removed in Gradio 6.0. This method was used to deploy Gradio apps as Discord bots.

**Before (Gradio 5.x):**
```python
from gradio_client import Client

client = Client("username/space-name")
client.deploy_discord(discord_bot_token="...")
```

**After (Gradio 6.x):**

The `deploy_discord` method is no longer available. Please see the [documentation on creating a Discord bot with Gradio](https://www.gradio.app/guides/creating-a-discord-bot-from-a-gradio-app) for alternative approaches.

`AppError` now subclasses `Exception` instead of `ValueError`

The `AppError` exception class in the Python client now subclasses `Exception` directly instead of `ValueError`. This is a breaking change if you have code that specifically catches `ValueError` to handle `AppError` instances.

**Before (Gradio 5.x):**
```python
from gradio_client import Client
from gradio_client.exceptions import AppError

try:
    client = Client("username/space-name")
    result = client.predict("/predict", inputs)
except ValueError as e:  # This would catch AppError in Gradio 5.x
    print(f"Error: {e}")
```

**After (Gradio 6.x):**
```python
from gradio_client import Client
from gradio_client.exceptions import AppError

try:
    client = Client("username/space-name")
    result = client.predict("/predict", inputs)
except AppError as e:  # Explicitly catch AppError
    print(f"App error: {e}")
except ValueError as e:  # This will no longer catch AppError
    print(f"Value error: {e}")
```
Python Client Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
    client = Client("username/space-name")
    result = client.predict("/predict", inputs)
except AppError as e:  # Explicitly catch AppError
    print(f"App error: {e}")
except ValueError as e:  # This will no longer catch AppError
    print(f"Value error: {e}")
```
Python Client Changes
https://gradio.app/guides/gradio-6-migration-guide
Other Tutorials - Gradio 6 Migration Guide Guide
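The catch-behavior change above can be verified without a running Space. A minimal sketch, using a stand-in `AppError(Exception)` class in place of `gradio_client.exceptions.AppError` (the real client is not imported here):

```python
# Stand-in for gradio_client.exceptions.AppError, which in Gradio 6
# subclasses Exception directly rather than ValueError.
class AppError(Exception):
    pass

def call_backend():
    # Simulate the client surfacing an app error.
    raise AppError("the Space raised an error")

def handle():
    try:
        call_backend()
    except ValueError:
        return "caught as ValueError"  # Gradio 5.x behavior
    except AppError:
        return "caught as AppError"    # Gradio 6.x behavior

print(handle())  # caught as AppError
```

Because `AppError` no longer inherits from `ValueError`, the first `except` clause never fires, which is exactly the migration hazard described above.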
Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or "token") into different categories, such as names of people or names of locations, or different parts of speech. For example, given the sentence: > Does Chicago have any Pakistani restaurants? A named-entity recognition algorithm may identify: - "Chicago" as a **location** - "Pakistani" as an **ethnicity** and so on. Using `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team. Here is an example of a demo that you'll be able to build: $demo_ner_pipeline This tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn! Prerequisites Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own; in this tutorial, we will use one from the `transformers` library. Approach 1: List of Entity Dictionaries Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a "start" index, and an "end" index. This is, for example, how NER models in the `transformers` library operate: ```py from transformers import pipeline ner_pipeline = pipeline("ner") ner_pipeline("Does Chicago have any Pakistani restaurants") ``` Output: ```bash [{'entity': 'I-LOC', 'score': 0.9988978, 'index': 2, 'word': 'Chicago', 'start': 5, 'end': 12}, {'entity': 'I-MISC', 'score': 0.9958592, 'index': 5, 'word': 'Pakistani', 'start': 22, 'end': 31}] ``` If you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass in this
Introduction
https://gradio.app/guides/named-entity-recognition
Other Tutorials - Named Entity Recognition Guide
index': 5, 'word': 'Pakistani', 'start': 22, 'end': 31}] ``` If you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass in this **list of entities**, along with the **original text** to the model, together as a dictionary, with the keys being `"entities"` and `"text"` respectively. Here is a complete example: $code_ner_pipeline $demo_ner_pipeline Approach 2: List of Tuples An alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities. In some cases, this can be easier than the first approach. Here is a demo showing this approach using Spacy's parts-of-speech tagger: $code_text_analysis $demo_text_analysis --- And you're done! That's all you need to know to build a web-based GUI for your NER model. Fun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.
Introduction
https://gradio.app/guides/named-entity-recognition
Other Tutorials - Named Entity Recognition Guide
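The two approaches above are mechanically related: entity dicts with `start`/`end` indices (Approach 1) can be sliced into the list-of-tuples format (Approach 2). A sketch with a hypothetical `entities_to_tuples` helper (not part of Gradio or `transformers`):

```python
# Hypothetical helper: convert transformers-style NER output (list of
# entity dicts with "start"/"end" indices) into the list-of-tuples
# format that HighlightedText also accepts.
def entities_to_tuples(text, entities):
    tuples = []
    cursor = 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        if ent["start"] > cursor:
            # Unlabeled text before this entity
            tuples.append((text[cursor:ent["start"]], None))
        tuples.append((text[ent["start"]:ent["end"]], ent["entity"]))
        cursor = ent["end"]
    if cursor < len(text):
        # Trailing unlabeled text
        tuples.append((text[cursor:], None))
    return tuples

text = "Does Chicago have any Pakistani restaurants"
entities = [
    {"entity": "I-LOC", "start": 5, "end": 12},
    {"entity": "I-MISC", "start": 22, "end": 31},
]
print(entities_to_tuples(text, entities))
# [('Does ', None), ('Chicago', 'I-LOC'), (' have any ', None),
#  ('Pakistani', 'I-MISC'), (' restaurants', None)]
```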
To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. If not, you can do this for free in just a couple of minutes. 1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/) 2. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one. 3. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "BigQuery API", click on it, and click the "Enable" button. If you see the "Manage" button, then the BigQuery API is already enabled, and you're all set. 4. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button. 5. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. Also grant the service account permissions by giving it a role such as "BigQuery User", which will allow you to run queries. 6. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this: ```json { "type": "service_account", "project_id": "your project", "private_key_id": "your private key id", "private_key": "private key", "client_email": "email", "client_id": "client id", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id" } ```
Setting up your BigQuery Credentials
https://gradio.app/guides/creating-a-dashboard-from-bigquery-data
Other Tutorials - Creating A Dashboard From Bigquery Data Guide
Once you have the credentials, you will need to use the BigQuery Python client to authenticate using your credentials. To do this, you will need to install the BigQuery Python client by running the following command in the terminal:

```bash
pip install google-cloud-bigquery[pandas]
```

You'll notice that we've installed the pandas add-on, which will be helpful for processing the BigQuery dataset as a pandas dataframe. Once the client is installed, you can authenticate using your credentials by running the following code:

```py
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("path/to/key.json")
```

With your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets. Here is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:

```py
import numpy as np

QUERY = (
    'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
    'ORDER BY date DESC,confirmed_cases DESC '
    'LIMIT 20')

def run_query():
    query_job = client.query(QUERY)
    query_result = query_job.result()
    df = query_result.to_dataframe()
    # Select a subset of columns
    df = df[["confirmed_cases", "deaths", "county", "state_name"]]
    # Convert numeric columns to standard numpy types
    df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
    return df
```
Using the BigQuery Client
https://gradio.app/guides/creating-a-dashboard-from-bigquery-data
Other Tutorials - Creating A Dashboard From Bigquery Data Guide
Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly. Here is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you can also pass in the keyword `every` to tell the dashboard to refresh every hour (60\*60 seconds).

```py
import gradio as gr

with gr.Blocks() as demo:
    gr.DataFrame(run_query, every=gr.Timer(60*60))

demo.launch()
```

Perhaps you'd like to add a visualization to our dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset and can be useful for exploring the data and gaining insights. Again, we can do this in real-time by passing in the `every` parameter. Here is a complete example showing how to use the `gr.ScatterPlot` to visualize in addition to displaying data with the `gr.DataFrame`

```py
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("💉 Covid Dashboard (Updated Hourly)")
    with gr.Row():
        gr.DataFrame(run_query, every=gr.Timer(60*60))
        gr.ScatterPlot(run_query, every=gr.Timer(60*60), x="confirmed_cases",
                       y="deaths", tooltip="county", width=500, height=500)

demo.queue().launch()  # Run the demo with queuing enabled
```
Building the Real-Time Dashboard
https://gradio.app/guides/creating-a-dashboard-from-bigquery-data
Other Tutorials - Creating A Dashboard From Bigquery Data Guide
Let's go through a simple example to understand how to containerize a Gradio app using Docker. Step 1: Create Your Gradio App First, we need a simple Gradio app. Let's create a Python file named `app.py` with the following content: ```python import gradio as gr def greet(name): return f"Hello {name}!" iface = gr.Interface(fn=greet, inputs="text", outputs="text").launch() ``` This app creates a simple interface that greets the user by name. Step 2: Create a Dockerfile Next, we'll create a Dockerfile to specify how our app should be built and run in a Docker container. Create a file named `Dockerfile` in the same directory as your app with the following content: ```dockerfile FROM python:3.10-slim WORKDIR /usr/src/app COPY . . RUN pip install --no-cache-dir gradio EXPOSE 7860 ENV GRADIO_SERVER_NAME="0.0.0.0" CMD ["python", "app.py"] ``` This Dockerfile performs the following steps: - Starts from a Python 3.10 slim image. - Sets the working directory and copies the app into the container. - Installs Gradio (you should install all other requirements as well). - Exposes port 7860 (Gradio's default port). - Sets the `GRADIO_SERVER_NAME` environment variable to ensure Gradio listens on all network interfaces. - Specifies the command to run the app. Step 3: Build and Run Your Docker Container With the Dockerfile in place, you can build and run your container: ```bash docker build -t gradio-app . docker run -p 7860:7860 gradio-app ``` Your Gradio app should now be accessible at `http://localhost:7860`.
How to Dockerize a Gradio App
https://gradio.app/guides/deploying-gradio-with-docker
Other Tutorials - Deploying Gradio With Docker Guide
When running Gradio applications in Docker, there are a few important things to keep in mind: Running the Gradio app on `"0.0.0.0"` and exposing port 7860 In the Docker environment, setting `GRADIO_SERVER_NAME="0.0.0.0"` as an environment variable (or directly in your Gradio app's `launch()` function) is crucial for allowing connections from outside the container. And the `EXPOSE 7860` directive in the Dockerfile tells Docker to expose Gradio's default port on the container to enable external access to the Gradio app. Enable Stickiness for Multiple Replicas When deploying Gradio apps with multiple replicas, such as on AWS ECS, it's important to enable stickiness with `sessionAffinity: ClientIP`. This ensures that all requests from the same user are routed to the same instance. This is important because Gradio's communication protocol requires multiple separate connections from the frontend to the backend in order for events to be processed correctly. (If you use Terraform, you'll want to add a [stickiness block](https://registry.terraform.io/providers/hashicorp/aws/3.14.1/docs/resources/lb_target_group#stickiness) into your target group definition.) Deploying Behind a Proxy If you're deploying your Gradio app behind a proxy, like Nginx, it's essential to configure the proxy correctly. Gradio provides a [Guide that walks through the necessary steps](https://www.gradio.app/guides/running-gradio-on-your-web-server-with-nginx). This setup ensures your app is accessible and performs well in production environments.
Important Considerations
https://gradio.app/guides/deploying-gradio-with-docker
Other Tutorials - Deploying Gradio With Docker Guide
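If you prefer Docker Compose over raw `docker run`, the same port and server-name considerations might be captured in a compose file like the following sketch (the service name `gradio-app` is illustrative, and it assumes the Dockerfile from the previous section):

```yaml
# Hypothetical docker-compose.yml; assumes the Dockerfile shown earlier.
services:
  gradio-app:
    build: .
    ports:
      - "7860:7860"                    # map Gradio's default port to the host
    environment:
      - GRADIO_SERVER_NAME=0.0.0.0     # listen on all interfaces inside the container
```

Run it with `docker compose up`; the app is then reachable at `http://localhost:7860`, just as with the `docker run` command above.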
When you demo a machine learning model, you might want to collect data from users who try the model, particularly data points in which the model is not behaving as expected. Capturing these "hard" data points is valuable because it allows you to improve your machine learning model and make it more reliable and robust. Gradio simplifies the collection of this data by including a **Flag** button with every `Interface`. This allows a user or tester to easily send data back to the machine where the demo is running. In this Guide, we discuss more about how to use the flagging feature, both with `gradio.Interface` as well as with `gradio.Blocks`.
Introduction
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file. There are [four parameters](https://gradio.app/docs/interfaceinitialization) in `gradio.Interface` that control how flagging works. We will go over them in greater detail. - `flagging_mode`: this parameter can be set to either `"manual"` (default), `"auto"`, or `"never"`. - `manual`: users will see a button to flag, and samples are only flagged when the button is clicked. - `auto`: users will not see a button to flag, but every sample will be flagged automatically. - `never`: users will not see a button to flag, and no sample will be flagged. - `flagging_options`: this parameter can be either `None` (default) or a list of strings. - If `None`, then the user simply clicks on the **Flag** button and no additional options are shown. - If a list of strings is provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `["Incorrect", "Ambiguous"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `flagging_mode` is `"manual"`. - The chosen option is then logged along with the input and output. - `flagging_dir`: this parameter takes a string. - It represents what to name the directory where flagged data is stored. - `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class - Using this parameter allows you to write custom code that gets run whe
The **Flag** button in `gradio.Interface`
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
flagged data is stored. - `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class - Using this parameter allows you to write custom code that gets run when the flag button is clicked - By default, this is set to an instance of `gr.CSVLogger`
The **Flag** button in `gradio.Interface`
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
Within the directory provided by the `flagging_dir` argument, a CSV file will log the flagged data. Here's an example: The code below creates the calculator interface embedded below it:

```python
import gradio as gr

def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2

iface = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    flagging_mode="manual"
)

iface.launch()
```

<gradio-app space="gradio/calculator-flagging-basic"></gradio-app>

When you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a csv file inside it. This csv file includes all the data that was flagged.

```directory
+-- flagged/
|   +-- logs.csv
```

_flagged/logs.csv_

```csv
num1,operation,num2,Output,timestamp
5,add,7,12,2022-01-31 11:40:51.093412
6,subtract,1.5,4.5,2022-01-31 03:25:32.023542
```

If the interface involves file data, such as for Image and Audio components, folders will be created to store those flagged data as well. For example, an `image` input to `image` output interface will create the following structure.

```directory
+-- flagged/
|   +-- logs.csv
|   +-- image/
|   |   +-- 0.png
|   |   +-- 1.png
|   +-- Output/
|   |   +-- 0.png
|   |   +-- 1.png
```

_flagged/logs.csv_

```csv
im,Output,timestamp
im/0.png,Output/0.png,2022-02-04 19:49:58.026963
im/1.png,Output/1.png,2022-02-02 10:40:51.093412
```

If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV. If we go back to the calculator example, the fo
What happens to flagged data?
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
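To post-process flagged samples, the log can be read back with the standard library's `csv` module. A sketch using the `logs.csv` contents shown above, inlined as a string for illustration (a real app would open `flagged/logs.csv` instead):

```python
import csv
import io

# The logs.csv contents from the guide, inlined for illustration.
log_data = """num1,operation,num2,Output,timestamp
5,add,7,12,2022-01-31 11:40:51.093412
6,subtract,1.5,4.5,2022-01-31 03:25:32.023542
"""

# In a real app: with open("flagged/logs.csv") as f: rows = list(csv.DictReader(f))
rows = list(csv.DictReader(io.StringIO(log_data)))
for row in rows:
    print(row["num1"], row["operation"], row["num2"], "->", row["Output"])
```

Each row comes back as a dict keyed by the header names, which makes it easy to filter flagged samples by input values or by the optional `flag` column described below.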
` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV. If we go back to the calculator example, the following code will create the interface embedded below it.

```python
iface = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    flagging_mode="manual",
    flagging_options=["wrong sign", "off by one", "other"]
)

iface.launch()
```

<gradio-app space="gradio/calculator-flagging-options"></gradio-app>

When users click the flag button, the csv file will now include a column indicating the selected option.

_flagged/logs.csv_

```csv
num1,operation,num2,Output,flag,timestamp
5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412
6,subtract,1.5,3.5,off by one,2022-02-04 11:42:32.062512
```
What happens to flagged data?
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
What about if you are using `gradio.Blocks`? On one hand, you have even more flexibility with Blocks -- you can write whatever Python code you want to run when a button is clicked, and assign that using the built-in events in Blocks. At the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code. This requires two steps: 1. You have to run your callback's `.setup()` somewhere in the code prior to the first time you flag data 2. When the flagging button is clicked, then you trigger the callback's `.flag()` method, making sure to collect the arguments correctly and disabling the typical preprocessing. Here is an example with an image sepia filter Blocks demo that lets you flag data using the default `CSVLogger`: $code_blocks_flag $demo_blocks_flag
Flagging with Blocks
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `flagging_mode=auto` (when all of the data submitted through the demo is being flagged) That's all! Happy building :)
Privacy
https://gradio.app/guides/using-flagging
Other Tutorials - Using Flagging Guide
A virtual environment in Python is a self-contained directory that holds a Python installation for a particular version of Python, along with a number of additional packages. This environment is isolated from the main Python installation and other virtual environments. Each environment can have its own independent set of installed Python packages, which allows you to maintain different versions of libraries for different projects without conflicts. Using virtual environments ensures that you can work on multiple Python projects on the same machine without any conflicts. This is particularly useful when different projects require different versions of the same library. It also simplifies dependency management and enhances reproducibility, as you can easily share the requirements of your project with others.
Virtual Environments
https://gradio.app/guides/installing-gradio-in-a-virtual-environment
Other Tutorials - Installing Gradio In A Virtual Environment Guide
To install Gradio on a Windows system in a virtual environment, follow these steps: 1. **Install Python**: Ensure you have Python 3.10 or higher installed. You can download it from [python.org](https://www.python.org/). You can verify the installation by running `python --version` or `python3 --version` in Command Prompt. 2. **Create a Virtual Environment**: Open Command Prompt and navigate to your project directory. Then create a virtual environment using the following command: ```bash python -m venv gradio-env ``` This command creates a new directory `gradio-env` in your project folder, containing a fresh Python installation. 3. **Activate the Virtual Environment**: To activate the virtual environment, run: ```bash .\gradio-env\Scripts\activate ``` Your command prompt should now indicate that you are working inside `gradio-env`. Note: you can choose a different name than `gradio-env` for your virtual environment in this step. 4. **Install Gradio**: Now, you can install Gradio using pip: ```bash pip install gradio ``` 5. **Verification**: To verify the installation, run `python` and then type: ```python import gradio as gr print(gr.__version__) ``` This will display the installed version of Gradio.
Installing Gradio on Windows
https://gradio.app/guides/installing-gradio-in-a-virtual-environment
Other Tutorials - Installing Gradio In A Virtual Environment Guide
The installation steps on MacOS and Linux are similar to Windows but with some differences in commands.

1. **Install Python**: Python usually comes pre-installed on MacOS and most Linux distributions. You can verify the installation by running `python --version` in the terminal (note that depending on how Python is installed, you might have to use `python3` instead of `python` throughout these steps). Ensure you have Python 3.10 or higher installed. If you do not have it installed, you can download it from [python.org](https://www.python.org/).

2. **Create a Virtual Environment**: Open Terminal and navigate to your project directory. Then create a virtual environment using:

```bash
python -m venv gradio-env
```

Note: you can choose a different name than `gradio-env` for your virtual environment in this step.

3. **Activate the Virtual Environment**: To activate the virtual environment on MacOS/Linux, use:

```bash
source gradio-env/bin/activate
```

4. **Install Gradio**: With the virtual environment activated, install Gradio using pip:

```bash
pip install gradio
```

5. **Verification**: To verify the installation, run `python` and then type:

```python
import gradio as gr
print(gr.__version__)
```

This will display the installed version of Gradio.

By following these steps, you can successfully install Gradio in a virtual environment on your operating system, ensuring a clean and managed workspace for your Python projects.
Installing Gradio on MacOS/Linux
https://gradio.app/guides/installing-gradio-in-a-virtual-environment
Other Tutorials - Installing Gradio In A Virtual Environment Guide
Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):

1\. Get the URL of the Google Sheet that you want to use. To do this, simply go to the Google Sheet, click on the "Share" button in the top-right corner, and then click on the "Get shareable link" button. This will give you a URL that looks something like this:

```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```

2\. Now, let's modify this URL and then use it to read the data from the Google Sheet into a Pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):

```python
import pandas as pd

URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"
csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')

def get_data():
    return pd.read_csv(csv_url)
```

3\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("📈 Real-Time Line Plot")
    with gr.Row():
        with gr.Column():
            gr.DataFrame(get_data, every=gr.Timer(5))
        with gr.Column():
            gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)

# Run the demo with queuing enabled
demo.queue().launch()
```

And that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
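The only transformation in step 2 is a string substitution on the sheet's share URL. Factored out as a helper, it can be sketched like this (the `to_csv_export_url` name is our own, not part of pandas or any Google API):

```python
def to_csv_export_url(edit_url: str) -> str:
    """Rewrite a Google Sheets '/edit#gid=' share URL into its CSV export form,
    which pandas can read directly with pd.read_csv()."""
    return edit_url.replace("/edit#gid=", "/export?format=csv&gid=")

url = "https://docs.google.com/spreadsheets/d/abc123/edit#gid=0"
csv_url = to_csv_export_url(url)
```

The `gid` query parameter is preserved, so the export targets the same worksheet tab that the share link pointed at.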
Public Google Sheets
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
For private Google Sheets, the process requires a little more work, but not that much! The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets. Authentication To authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up google cloud credentials](https://developers.google.com/workspace/guides/create-credentials): 1\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/) 2\. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one. 3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set. 4\. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button. 5\. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. **Note down the email of the service account** 6\. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this: ```json { "type": "service_account", "project_id": "your project", "private_key_id": "your private key id", "private_key": "private key", "client_email": "email", "client_id": "client id", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id" } ```
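The querying steps that follow need the service-account email from this key file. Rather than copying it by hand, it can be read programmatically; a minimal stdlib-only sketch (the helper name is ours):

```python
import json

def service_account_email(key_json: str) -> str:
    """Read the downloaded credentials JSON and return the service-account
    email that the private sheet must be shared with."""
    return json.loads(key_json)["client_email"]

# Illustrative key contents; a real key file has many more fields.
sample_key = '{"type": "service_account", "client_email": "bot@my-project.iam.gserviceaccount.com"}'
email = service_account_email(sample_key)
```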
Private Google Sheets
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```

Querying

Once you have the credentials `.json` file, you can use the following steps to query your Google Sheet:

1\. Click on the "Share" button in the top-right corner of the Google Sheet. Share the Google Sheet with the email address of the service account from Step 5 of the authentication subsection (this step is important!). Then click on the "Get shareable link" button. This will give you a URL that looks something like this:

```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```

2\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python, by running in the terminal: `pip install gspread`

3\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):

```python
import gspread
import pandas as pd

# Authenticate with Google and get the sheet
URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'

gc = gspread.service_account("path/to/key.json")
sh = gc.open_by_url(URL)
worksheet = sh.sheet1

def get_data():
    values = worksheet.get_all_values()
    df = pd.DataFrame(values[1:], columns=values[0])
    return df
```

4\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio cod
Private Google Sheets
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
. To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio code:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("📈 Real-Time Line Plot")
    with gr.Row():
        with gr.Column():
            gr.DataFrame(get_data, every=gr.Timer(5))
        with gr.Column():
            gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)

# Run the demo with queuing enabled
demo.queue().launch()
```

You now have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
Private Google Sheets
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.
Conclusion
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
First, we'll build the UI without handling these events and build from there. We'll use the Hugging Face InferenceClient in order to get started without setting up any API keys. This is what the first draft of our application looks like:

```python
from huggingface_hub import InferenceClient
import gradio as gr

client = InferenceClient()

def respond(
    prompt: str,
    history,
):
    if not history:
        history = [{"role": "system", "content": "You are a friendly chatbot"}]
    history.append({"role": "user", "content": prompt})

    yield history

    response = {"role": "assistant", "content": ""}
    for message in client.chat_completion(  # type: ignore
        history,
        temperature=0.95,
        top_p=0.9,
        max_tokens=512,
        stream=True,
        model="openai/gpt-oss-20b"
    ):
        response["content"] += message.choices[0].delta.content or "" if message.choices else ""
        yield history + [response]


with gr.Blocks() as demo:
    gr.Markdown("Chat with GPT-OSS 20b 🤗")
    chatbot = gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/376/hugging-face_1f917.png",
        ),
    )
    prompt = gr.Textbox(max_lines=1, label="Chat Message")
    prompt.submit(respond, [prompt, chatbot], [chatbot])
    prompt.submit(lambda: "", None, [prompt])

if __name__ == "__main__":
    demo.launch()
```
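The streaming pattern inside `respond` — accumulate each delta into one assistant message and yield the growing history — can be exercised without a model or API key. A minimal sketch (`stream_reply` and its stubbed deltas are our own, not part of any library):

```python
def stream_reply(history, deltas):
    """Accumulate streamed text deltas into a single assistant message,
    yielding the updated history after every chunk, as `respond` does."""
    response = {"role": "assistant", "content": ""}
    for delta in deltas:
        response["content"] += delta
        yield history + [response]

history = [{"role": "user", "content": "Hi"}]
# Each yielded state is the full history with the partial reply appended.
states = list(stream_reply(history, ["Hel", "lo", "!"]))
```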
The UI
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
Our undo event will populate the textbox with the previous user message and also remove all subsequent assistant responses. In order to know the index of the last user message, we can pass `gr.UndoData` to our event handler function like so:

```python
def handle_undo(history, undo_data: gr.UndoData):
    return history[:undo_data.index], history[undo_data.index]['content'][0]["text"]
```

We then pass this function to the `undo` event!

```python
chatbot.undo(handle_undo, chatbot, [chatbot, prompt])
```

You'll notice that every bot response will now have an "undo icon" you can use to undo the response -

![undo_event](https://github.com/user-attachments/assets/180b5302-bc4a-4c3e-903c-f14ec2adcaa6)

Tip: You can also access the content of the user message with `undo_data.value`
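Stripped of the Gradio wiring, the undo handler is plain list slicing. Here is a sketch with a stand-in for `gr.UndoData` and plain-string message content for brevity (the guide's version indexes into a list of content parts instead):

```python
from dataclasses import dataclass

@dataclass
class FakeUndoData:
    """Stand-in for gr.UndoData: carries the index of the undone user message."""
    index: int

def handle_undo(history, undo_data):
    # Drop the undone user message and everything after it; return the
    # truncated history plus the recovered prompt text for the textbox.
    return history[:undo_data.index], history[undo_data.index]["content"]

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
new_history, prompt = handle_undo(history, FakeUndoData(index=0))
```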
The Undo Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
The retry event will work similarly. We'll use `gr.RetryData` to get the index of the previous user message and remove all the subsequent messages from the history. Then we'll use the `respond` function to generate a new response. We could also get the previous prompt via the `value` property of `gr.RetryData`.

```python
def handle_retry(history, retry_data: gr.RetryData):
    new_history = history[:retry_data.index]
    previous_prompt = history[retry_data.index]['content'][0]["text"]
    yield from respond(previous_prompt, new_history)

...

chatbot.retry(handle_retry, chatbot, chatbot)
```

You'll see that the bot messages have a "retry" icon now -

![retry_event](https://github.com/user-attachments/assets/cec386a7-c4cd-4fb3-a2d7-78fd806ceac6)

Tip: The Hugging Face inference API caches responses, so in this demo, the retry button will not generate a new response.
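The truncate-then-regenerate flow can be tested without a model by stubbing `respond`. Everything below is our own sketch: plain-string content, `retry_index` standing in for `retry_data.index`, and a stub generator in place of the real streaming call:

```python
def respond(prompt, history):
    # Stub generator standing in for the real streaming model call:
    # echoes the prompt back in uppercase as the "new" assistant reply.
    yield history + [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": prompt.upper()},
    ]

def handle_retry(history, retry_index):
    # Recover the retried prompt, truncate the history before it,
    # and regenerate from there.
    prompt = history[retry_index]["content"]
    yield from respond(prompt, history[:retry_index])

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "old answer"},
]
final = list(handle_retry(history, 0))[-1]
```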
The Retry Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
By now you should hopefully be seeing the pattern! To let users like a message, we'll add a `.like` event to our chatbot. We'll pass it a function that accepts a `gr.LikeData` object. In this case, we'll just print the message that was either liked or disliked.

```python
def handle_like(data: gr.LikeData):
    if data.liked:
        print("You upvoted this response: ", data.value)
    else:
        print("You downvoted this response: ", data.value)

chatbot.like(handle_like, None, None)
```
The Like Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
Same idea with the edit listener! With `gr.Chatbot(editable=True)`, you can capture user edits. The `gr.EditData` object tells us the index of the message edited and the new text of the message. Below, we use this object to edit the history, and delete any subsequent messages.

```python
def handle_edit(history, edit_data: gr.EditData):
    new_history = history[:edit_data.index]
    new_history[-1]['content'] = [{"text": edit_data.value, "type": "text"}]
    return new_history

...

chatbot.edit(handle_edit, chatbot, chatbot)
```
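The edit handler reduces to "truncate after the edited message and swap in the new text". A sketch with plain-string content, where `edit_index` stands in for `gr.EditData.index` and we treat it (our own convention for this sketch) as the edited message's own position, so `edit_index + 1` messages are kept:

```python
def handle_edit(history, edit_index, new_text):
    # Keep messages up to and including the edited one,
    # replace its text, and drop everything after it.
    new_history = history[:edit_index + 1]
    new_history[-1]["content"] = new_text
    return new_history

history = [
    {"role": "user", "content": "Tell me about cats"},
    {"role": "assistant", "content": "Cats are small felines."},
]
result = handle_edit(history, 0, "Tell me about dogs")
```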
The Edit Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
As a bonus, we'll also cover the `.clear()` event, which is triggered when the user clicks the clear icon to clear all messages. As a developer, you can attach additional events that should happen when this icon is clicked, e.g. to handle clearing of additional chatbot state:

```python
from uuid import uuid4
import gradio as gr

def clear():
    print("Cleared uuid")
    return uuid4()

def chat_fn(user_input, history, uuid):
    return f"{user_input} with uuid {uuid}"

with gr.Blocks() as demo:
    uuid_state = gr.State(uuid4)
    chatbot = gr.Chatbot()
    chatbot.clear(clear, outputs=[uuid_state])
    gr.ChatInterface(
        chat_fn,
        additional_inputs=[uuid_state],
        chatbot=chatbot,
    )

demo.launch()
```

In this example, the `clear` function, bound to the `chatbot.clear` event, returns a new UUID into our session state when the chat history is cleared via the trash icon. This can be seen in the `chat_fn` function, which references the UUID saved in our session state. This example also shows that you can use these events with `gr.ChatInterface` by passing in a custom `gr.Chatbot` object.
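The state-reset idea itself is independent of Gradio: clearing swaps in a fresh session identifier that subsequent chat calls will see. A minimal sketch, where the `session` dict stands in for `gr.State`:

```python
from uuid import UUID, uuid4

session = {"uuid": uuid4()}  # stand-in for gr.State holding per-session data

def clear(session):
    """Mimic the chatbot.clear handler: issue a fresh UUID for the session."""
    session["uuid"] = uuid4()
    return session["uuid"]

before = session["uuid"]
after = clear(session)
```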
The Clear Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
That's it! You now know how you can implement the retry, undo, like, and clear events for the Chatbot.
Conclusion
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
Every element of the chatbot value is a dictionary of `role` and `content` keys. You can always use plain python dictionaries to add new values to the chatbot, but Gradio also provides the `ChatMessage` dataclass to help you with IDE autocompletion. The schema of `ChatMessage` is as follows:

```py
MessageContent = Union[str, FileDataDict, FileData, Component]

@dataclass
class ChatMessage:
    content: MessageContent | list[MessageContent]
    role: Literal["user", "assistant"]
    metadata: MetadataDict = None
    options: list[OptionDict] = None

class MetadataDict(TypedDict):
    title: NotRequired[str]
    id: NotRequired[int | str]
    parent_id: NotRequired[int | str]
    log: NotRequired[str]
    duration: NotRequired[float]
    status: NotRequired[Literal["pending", "done"]]

class OptionDict(TypedDict):
    label: NotRequired[str]
    value: str
```

For our purposes, the most important key is the `metadata` key, which accepts a dictionary. If this dictionary includes a `title` for the message, it will be displayed in a collapsible accordion representing a thought. It's that simple! Take a look at this example:

```python
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(
        value=[
            gr.ChatMessage(
                role="user",
                content="What is the weather in San Francisco?"
            ),
            gr.ChatMessage(
                role="assistant",
                content="I need to use the weather API tool?",
                metadata={"title": "🧠 Thinking"}
            )
        ]
    )

demo.launch()
```

In addition to `title`, the dictionary provided to `metadata` can take several optional keys:

* `log`: an optional string value to be displayed in a subdued font next to the thought title.
* `duration`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in a subdued font, in parentheses, next to the thought title.
* `status`: if set to `
The `ChatMessage` dataclass
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
tion`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in a subdued font, in parentheses, next to the thought title.
* `status`: if set to `"pending"`, a spinner appears next to the thought title and the accordion is initialized open. If `status` is `"done"`, the thought accordion is initialized closed. If `status` is not provided, the thought accordion is initialized open and no spinner is displayed.
* `id` and `parent_id`: if these are provided, they can be used to nest thoughts inside other thoughts.

Below, we show several complete examples of using `gr.Chatbot` and `gr.ChatInterface` to display tool use or thinking UIs.
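Since `metadata` is just a dictionary, nested thoughts can be built out of plain dicts. A sketch with illustrative values (the titles, ids, and contents below are our own):

```python
parent = {
    "role": "assistant",
    "content": "Planning which tool to call",
    "metadata": {"title": "🧠 Thinking", "id": 1, "status": "done"},
}
child = {
    "role": "assistant",
    "content": "Calling the weather API",
    # parent_id matches the parent's id, so this thought nests under it
    "metadata": {"title": "🛠️ Tool call", "id": 2, "parent_id": 1, "duration": 0.4},
}
history = [{"role": "user", "content": "Weather in SF?"}, parent, child]
```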
The `ChatMessage` dataclass
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
A real example using transformers.agents

We'll create a Gradio application for a simple agent that has access to a text-to-image tool.

Tip: Make sure you read the [smolagents documentation](https://huggingface.co/docs/smolagents/index) first

We'll start by importing the necessary classes from transformers and gradio.

```python
import gradio as gr
from gradio import ChatMessage
from transformers import Tool, ReactCodeAgent  # type: ignore
from transformers.agents import stream_to_gradio, HfApiEngine  # type: ignore

# Import tool from Hub
image_generation_tool = Tool.from_space(
    space_id="black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generates an image following your prompt. Returns a PIL Image.",
    api_name="/infer",
)

llm_engine = HfApiEngine("Qwen/Qwen2.5-Coder-32B-Instruct")

# Initialize the agent with both tools and engine
agent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)
```

Then we'll build the UI:

```python
from dataclasses import asdict

def interact_with_agent(prompt, history):
    messages = []
    yield messages
    for msg in stream_to_gradio(agent, prompt):
        messages.append(asdict(msg))
        yield messages
    yield messages


demo = gr.ChatInterface(
    interact_with_agent,
    chatbot=gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png",
        ),
    ),
    examples=[
        ["Generate an image of an astronaut riding an alligator"],
        ["I am writing a children's book for my daughter. Can you help me with some illustrations?"],
    ],
)
```

You can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).

![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)

A real example using langchain agents

We'll create a UI for a LangChain agent that has access to a search eng
Building with Agents
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
om/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)

A real example using langchain agents

We'll create a UI for a LangChain agent that has access to a search engine.

We'll begin with imports and setting up the langchain agent. Note that you'll need an .env file with the following environment variables set -

```
SERPAPI_API_KEY=
HF_TOKEN=
OPENAI_API_KEY=
```

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools
from langchain_openai import ChatOpenAI

from gradio import ChatMessage
import gradio as gr

from dotenv import load_dotenv

load_dotenv()

model = ChatOpenAI(temperature=0, streaming=True)

tools = load_tools(["serpapi"])

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(
    model.with_config({"tags": ["agent_llm"]}), tools, prompt
)
agent_executor = AgentExecutor(agent=agent, tools=tools).with_config(
    {"run_name": "Agent"}
)
```

Then we'll create the Gradio UI:

```python
async def interact_with_langchain_agent(prompt, messages):
    messages.append(ChatMessage(role="user", content=prompt))
    yield messages
    async for chunk in agent_executor.astream(
        {"input": prompt}
    ):
        if "steps" in chunk:
            for step in chunk["steps"]:
                messages.append(ChatMessage(role="assistant", content=step.action.log,
                                            metadata={"title": f"🛠️ Used tool {step.action.tool}"}))
                yield messages
        if "output" in chunk:
            messages.append(ChatMessage(role="assistant", content=chunk["output"]))
            yield messages


with gr.Blocks() as demo:
    gr.Markdown("Chat with a LangChain Agent 🦜⛓️ and see its thoughts 💭")
    chatbot = gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png",
Building with Agents
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
🦜⛓️ and see its thoughts 💭")
    chatbot = gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png",
        ),
    )
    input = gr.Textbox(lines=1, label="Chat Message")
    input.submit(interact_with_langchain_agent, [input, chatbot], [chatbot])

demo.launch()
```

![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)

That's it! See our finished langchain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).
Building with Agents
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
The Gradio Chatbot can natively display intermediate thoughts of a _thinking_ LLM. This makes it perfect for creating UIs that show how an AI model "thinks" while generating responses. The guide below will show you how to build a chatbot that displays Gemini AI's thought process in real-time.

A real example using Gemini 2.0 Flash Thinking API

Let's create a complete chatbot that shows its thoughts and responses in real-time. We'll use Google's Gemini API for accessing Gemini 2.0 Flash Thinking LLM and Gradio for the UI.

We'll begin with imports and setting up the gemini client. Note that you'll need to [acquire a Google Gemini API key](https://aistudio.google.com/apikey) first -

```python
import gradio as gr
from gradio import ChatMessage
from typing import Iterator
import google.generativeai as genai

genai.configure(api_key="your-gemini-api-key")
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-1219")
```

First, let's set up our streaming function that handles the model's output:

```python
def stream_gemini_response(user_message: str, messages: list) -> Iterator[list]:
    """
    Streams both thoughts and responses from the Gemini model.
    """
    # Initialize response from Gemini
    response = model.generate_content(user_message, stream=True)

    # Initialize buffers
    thought_buffer = ""
    response_buffer = ""
    thinking_complete = False

    # Add initial thinking message
    messages.append(
        ChatMessage(
            role="assistant",
            content="",
            metadata={"title": "⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental"}
        )
    )

    for chunk in response:
        parts = chunk.candidates[0].content.parts
        current_chunk = parts[0].text

        if len(parts) == 2 and not thinking_complete:
            # Complete thought and start response
            thought_buffer += current_chunk
            messages[-1] = ChatMessage(
                rol
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
if len(parts) == 2 and not thinking_complete:
            # Complete thought and start response
            thought_buffer += current_chunk
            messages[-1] = ChatMessage(
                role="assistant",
                content=thought_buffer,
                metadata={"title": "⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental"}
            )

            # Add response message
            messages.append(
                ChatMessage(
                    role="assistant",
                    content=parts[1].text
                )
            )
            thinking_complete = True

        elif thinking_complete:
            # Continue streaming response
            response_buffer += current_chunk
            messages[-1] = ChatMessage(
                role="assistant",
                content=response_buffer
            )

        else:
            # Continue streaming thoughts
            thought_buffer += current_chunk
            messages[-1] = ChatMessage(
                role="assistant",
                content=thought_buffer,
                metadata={"title": "⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental"}
            )

        yield messages
```

Then, let's create the Gradio interface:

```python
with gr.Blocks() as demo:
    gr.Markdown("Chat with Gemini 2.0 Flash and See its Thoughts 💭")

    chatbot = gr.Chatbot(
        label="Gemini2.0 'Thinking' Chatbot",
        render_markdown=True,
    )

    input_box = gr.Textbox(
        lines=1,
        label="Chat Message",
        placeholder="Type your message here and press Enter..."
    )

    # Set up event handlers
    msg_store = gr.State("")  # Store for preserving user message

    input_box.submit(
        lambda msg: (msg, msg, ""),  # Store message and clear input
        inputs=[input_box],
        outputs=[msg_store, input_box, input_box],
        queue=Fa
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
message

    input_box.submit(
        lambda msg: (msg, msg, ""),  # Store message and clear input
        inputs=[input_box],
        outputs=[msg_store, input_box, input_box],
        queue=False
    ).then(
        user_message,  # Add user message to chat
        inputs=[msg_store, chatbot],
        outputs=[input_box, chatbot],
        queue=False
    ).then(
        stream_gemini_response,  # Generate and stream response
        inputs=[msg_store, chatbot],
        outputs=chatbot
    )

demo.launch()
```

This creates a chatbot that:

- Displays the model's thoughts in a collapsible section
- Streams the thoughts and final response in real-time
- Maintains a clean chat history

That's it! You now have a chatbot that not only responds to users but also shows its thinking process, creating a more transparent and engaging interaction. See our finished Gemini 2.0 Flash Thinking demo [here](https://huggingface.co/spaces/ysharma/Gemini2-Flash-Thinking).

Building with Citations

The Gradio Chatbot can display citations from LLM responses, making it perfect for creating UIs that show source documentation and references. This guide will show you how to build a chatbot that displays Claude's citations in real-time.

A real example using Anthropic's Citations API

Let's create a complete chatbot that shows both responses and their supporting citations. We'll use Anthropic's Claude API with citations enabled and Gradio for the UI.

We'll begin with imports and setting up the Anthropic client. Note that you'll need an `ANTHROPIC_API_KEY` environment variable set:

```python
import gradio as gr
import anthropic
import base64
from typing import List, Dict, Any

client = anthropic.Anthropic()
```

First, let's set up our message formatting functions that handle document preparation:

```python
def encode_pdf_to_base64(file_obj) -> str:
    """Convert uploaded PDF file to base64 string."""
    if file_obj is None:
        return None
    with open(file_obj.na
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
document preparation:

```python
def encode_pdf_to_base64(file_obj) -> str:
    """Convert uploaded PDF file to base64 string."""
    if file_obj is None:
        return None
    with open(file_obj.name, 'rb') as f:
        return base64.b64encode(f.read()).decode('utf-8')

def format_message_history(
    history: list,
    enable_citations: bool,
    doc_type: str,
    text_input: str,
    pdf_file: str
) -> List[Dict]:
    """Convert Gradio chat history to Anthropic message format."""
    formatted_messages = []

    # Add previous messages
    for msg in history[:-1]:
        if msg["role"] == "user":
            formatted_messages.append({"role": "user", "content": msg["content"]})

    # Prepare the latest message with document
    latest_message = {"role": "user", "content": []}

    if enable_citations:
        if doc_type == "plain_text":
            latest_message["content"].append({
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": text_input.strip()
                },
                "title": "Text Document",
                "citations": {"enabled": True}
            })
        elif doc_type == "pdf" and pdf_file:
            pdf_data = encode_pdf_to_base64(pdf_file)
            if pdf_data:
                latest_message["content"].append({
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_data
                    },
                    "title": pdf_file.name,
                    "citations": {"enabled": True}
                })

    # Add the user's question
    latest_message["content"].append({"type": "text", "text": history[-1]["content"]})

    formatted_messages.append(latest_message)
    return formatted_messages
```

Then, let's create our bot resp
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
latest_message["content"].append({"type": "text", "text": history[-1]["content"]})

    formatted_messages.append(latest_message)
    return formatted_messages
```

Then, let's create our bot response handler that processes citations:

```python
def bot_response(
    history: list,
    enable_citations: bool,
    doc_type: str,
    text_input: str,
    pdf_file: str
) -> List[Dict[str, Any]]:
    try:
        messages = format_message_history(history, enable_citations, doc_type, text_input, pdf_file)
        response = client.messages.create(model="claude-3-5-sonnet-20241022", max_tokens=1024, messages=messages)

        # Initialize main response and citations
        main_response = ""
        citations = []

        # Process each content block
        for block in response.content:
            if block.type == "text":
                main_response += block.text
                if enable_citations and hasattr(block, 'citations') and block.citations:
                    for citation in block.citations:
                        if citation.cited_text not in citations:
                            citations.append(citation.cited_text)

        # Add main response
        history.append({"role": "assistant", "content": main_response})

        # Add citations in a collapsible section
        if enable_citations and citations:
            history.append({
                "role": "assistant",
                "content": "\n".join([f"• {cite}" for cite in citations]),
                "metadata": {"title": "📚 Citations"}
            })

        return history

    except Exception as e:
        history.append({
            "role": "assistant",
            "content": "I apologize, but I encountered an error while processing your request."
        })
        return history
```

Finally, let's create the Gradio interface:

```python
with gr.Blocks() as demo:
    gr.Markdown("Chat with Citations")

    with gr.Row(sc
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
your request."
        })
        return history
```

Finally, let's create the Gradio interface:

```python
with gr.Blocks() as demo:
    gr.Markdown("Chat with Citations")

    with gr.Row(scale=1):
        with gr.Column(scale=4):
            chatbot = gr.Chatbot(bubble_full_width=False, show_label=False, scale=1)
            msg = gr.Textbox(placeholder="Enter your message here...", show_label=False, container=False)

        with gr.Column(scale=1):
            enable_citations = gr.Checkbox(label="Enable Citations", value=True, info="Toggle citation functionality")
            doc_type_radio = gr.Radio(
                choices=["plain_text", "pdf"],
                value="plain_text",
                label="Document Type",
                info="Choose the type of document to use")
            text_input = gr.Textbox(label="Document Content", lines=10, info="Enter the text you want to reference")
            pdf_input = gr.File(label="Upload PDF", file_types=[".pdf"], file_count="single", visible=False)

    # Handle message submission
    msg.submit(
        user_message,
        [msg, chatbot, enable_citations, doc_type_radio, text_input, pdf_input],
        [msg, chatbot]
    ).then(
        bot_response,
        [chatbot, enable_citations, doc_type_radio, text_input, pdf_input],
        chatbot
    )

demo.launch()
```

This creates a chatbot that:

- Supports both plain text and PDF documents for Claude to cite from
- Displays Citations in collapsible sections using our `metadata` feature
- Shows source quotes directly from the given documents

The citations feature works particularly well with the Gradio Chatbot's `metadata` support, allowing us to create collapsible sections that keep the chat interface clean while still providing easy access to source documentation.

That's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/a
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
tbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/anthropic-citations-with-gradio-metadata-key).
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
Chatbots are a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share it with your users, or try it yourself using an intuitive UI. This tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a _few lines of Python_. It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.

**Prerequisites**: please make sure you are using the latest version of Gradio:

```bash
$ pip install --upgrade gradio
```
Introduction
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run `pip install openai`. Then, with your own URL, model, and optional token:

```python
import gradio as gr

gr.load_chat("http://localhost:11434/v1/", model="llama3.2", token="***").launch()
```

Read about `gr.load_chat` in [the docs](https://www.gradio.app/docs/gradio/load_chat). If you have your own model, keep reading to see how to create an application around any chat model in Python!
Note for OpenAI-API compatible endpoints
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
To create a chat application with `gr.ChatInterface()`, the first thing you should do is define your **chat function**. In the simplest case, your chat function should accept two arguments: `message` and `history` (the arguments can be named anything, but must be in this order).

- `message`: a `str` representing the user's most recent message.
- `history`: a list of openai-style dictionaries with `role` and `content` keys, representing the previous conversation history. May also include additional keys representing message metadata.

The `history` would look like this:

```python
[
    {"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Paris"}]}
]
```

while the next `message` would be:

```py
"And what is its largest city?"
```

Your chat function simply needs to return:

* a `str` value, which is the chatbot's response based on the chat `history` and most recent `message`, for example, in this case:

```
Paris is also the largest city.
```

Let's take a look at a few example chat functions:

**Example: a chatbot that randomly responds with yes or no**

Let's write a chat function that responds `Yes` or `No` randomly. Here's our chat function:

```python
import random

def random_response(message, history):
    return random.choice(["Yes", "No"])
```

Now, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:

```python
import gradio as gr

gr.ChatInterface(
    fn=random_response,
).launch()
```

That's it! Here's our running demo, try it out:

$demo_chatinterface_random_response

**Example: a chatbot that alternates between agreeing and disagreeing**

Of course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.

```python
import gradio as gr

def alternatingl
Defining a chat function
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
t take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.

```python
import gradio as gr

def alternatingly_agree(message, history):
    if len([h for h in history if h['role'] == "assistant"]) % 2 == 0:
        return f"Yes, I do think that: {message}"
    else:
        return "I don't think so"

gr.ChatInterface(
    fn=alternatingly_agree,
).launch()
```

We'll look at more realistic examples of chat functions in our next Guide, which shows [examples of using `gr.ChatInterface` with popular LLMs](../guides/chatinterface-examples).
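Since each message's `content` can be a list of typed parts (as in the history format above), a small helper to flatten a history back to plain text can be handy, e.g. when building a prompt for an LLM. A minimal sketch; the helper name is our own:

```python
def history_to_text(history):
    """Flatten openai-style messages whose content may be a list of typed parts."""
    lines = []
    for msg in history:
        content = msg["content"]
        if isinstance(content, list):
            # Keep only the text parts; file parts and other types are skipped here
            content = " ".join(p["text"] for p in content if p.get("type") == "text")
        lines.append(f'{msg["role"]}: {content}')
    return "\n".join(lines)

history = [
    {"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Paris"}]},
]
print(history_to_text(history))
# user: What is the capital of France?
# assistant: Paris
```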
Defining a chat function
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!

```python
import time
import gradio as gr

def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.3)
        yield "You typed: " + message[: i+1]

gr.ChatInterface(
    fn=slow_echo,
).launch()
```

While the response is streaming, the "Submit" button turns into a "Stop" button that can be used to stop the generator function.

Tip: Even though you are yielding the latest message at each iteration, Gradio only sends the "diff" of each message from the server to the frontend, which reduces latency and data consumption over your network.
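The same `yield` pattern applies when tokens arrive in chunks, as most LLM APIs stream them: accumulate the chunks and yield the running text. A minimal sketch, with a hardcoded token list standing in for an API stream:

```python
def stream_reply(message, history):
    tokens = ["You ", "typed: ", message]  # stand-in for an LLM token stream
    partial = ""
    for tok in tokens:
        partial += tok
        yield partial  # each yield replaces the previously displayed message
```

Wrapped in `gr.ChatInterface(fn=stream_reply)`, each yielded string overwrites the last, so the user sees the reply grow in place.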
Streaming chatbots
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
If you're familiar with Gradio's `gr.Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:

- add a title and description above your chatbot using `title` and `description` arguments.
- add a theme or custom css using `theme` and `css` arguments respectively in the `launch()` method.
- add `examples` and even enable `cache_examples`, which makes your Chatbot easier for users to try out.
- customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).

**Adding examples**

You can add preset examples to your `gr.ChatInterface` with the `examples` parameter, which takes a list of string examples. Any examples will appear as "buttons" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: `{"text": "What's in this image?", "files": ["cheetah.jpg"]}`. Each file will be a separate message that is added to your Chatbot history.

You can change the displayed text for each example by using the `example_labels` argument. You can add icons to each example as well using the `example_icons` argument. Both of these arguments take a list of strings, which should be the same length as the `examples` list. If you'd like to cache the examples so that they are pre-computed and the results appear instantly, set `cache_examples=True`.

**Customizing the chatbot or textbox component**

If you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox components. Here's an example of how to apply the parameters we've discussed in this section:

```python
import gradio as gr

def yes_man(message, history):
    if message.endswith("?"):
        return "Yes"
    else:
Customizing the Chat UI
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
le of how to apply the parameters we've discussed in this section:

```python
import gradio as gr

def yes_man(message, history):
    if message.endswith("?"):
        return "Yes"
    else:
        return "Ask me anything!"

gr.ChatInterface(
    yes_man,
    chatbot=gr.Chatbot(height=300),
    textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
    title="Yes Man",
    description="Ask Yes Man any question",
    examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
    cache_examples=True,
).launch(theme="ocean")
```

Here's another example that adds a "placeholder" for your chat interface, which appears before the user has started chatting. The `placeholder` argument of `gr.Chatbot` accepts Markdown or HTML:

```python
gr.ChatInterface(
    yes_man,
    chatbot=gr.Chatbot(placeholder="<strong>Your Personal Yes-Man</strong><br>Ask Me Anything"),
...
```

The placeholder appears vertically and horizontally centered in the chatbot.
Customizing the Chat UI
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot "multimodal" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.

When `multimodal=True`, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as `message` above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this:

```py
{
    "text": "user input",
    "files": [
        "updated_file_1_path.ext",
        "updated_file_2_path.ext",
        ...
    ]
}
```

The second parameter of your chat function, `history`, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the `content` key will be a dictionary with a "type" key whose value is "file" and the file will be represented as a dictionary. All the files will be grouped in a single message in the history. So after uploading two files and asking a question, your history might look like this:

```python
[
    {"role": "user", "content": [{"type": "file", "file": {"path": "cat1.png"}}, {"type": "file", "file": {"path": "cat2.png"}}, {"type": "text", "text": "What's the difference between these two images?"}]}
]
```

The return type of your chat function does *not change* when setting `multimodal=True` (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. returning files [below](returning-complex-responses).

If you are customizing a multimodal chat interface, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to
Multimodal Chat Interface
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
ox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to set up and customize a multimodal chat interface:

```python
import gradio as gr

def count_images(message, history):
    num_images = len(message["files"])
    total_images = 0
    for msg in history:
        for content in msg["content"]:
            if content["type"] == "file":
                total_images += 1
    return f"You just uploaded {num_images} images, total uploaded: {total_images+num_images}"

demo = gr.ChatInterface(
    fn=count_images,
    examples=[
        {"text": "No files", "files": []}
    ],
    multimodal=True,
    textbox=gr.MultimodalTextbox(file_count="multiple", file_types=["image"], sources=["upload", "microphone"])
)

demo.launch()
```
Multimodal Chat Interface
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `gr.ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.

The `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `"textbox"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot within a `gr.Accordion()`.

Here's a complete example:

$code_chatinterface_system_prompt

If the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.

```python
import gradio as gr
import time

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i+1]

with gr.Blocks() as demo:
    system_prompt = gr.Textbox("You are helpful AI.", label="System Prompt")
    slider = gr.Slider(10, 100, render=False)

    gr.ChatInterface(
        echo, additional_inputs=[system_prompt, slider],
    )

demo.launch()
```

**Examples with additional inputs**

You can also add example values for your additional inputs. Pass in a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example v
Additional Inputs
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
s to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. When additional inputs are provided, examples are rendered in a table underneath the chat interface.

If you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).
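For a chat function with two additional inputs (say, a system prompt and a token count, as in the earlier echo example), the examples table could be built like this. Each inner list is `1 + len(additional_inputs)` long; the values here are purely illustrative:

```python
# One row per sample: [message, system_prompt, tokens]
examples = [
    ["Hello", "You are helpful AI.", 50],
    ["Tell me a joke", "You are a comedian.", 100],
]

# Sanity check: every row matches 1 + number of additional inputs
num_additional_inputs = 2
assert all(len(row) == 1 + num_additional_inputs for row in examples)
```

This list would then be passed as `gr.ChatInterface(..., examples=examples, additional_inputs=[...])`.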
Additional Inputs
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
In the same way that you can accept additional inputs into your chat function, you can also return additional outputs. Simply pass in a list of components to the `additional_outputs` parameter in `gr.ChatInterface` and return additional values for each component from your chat function. Here's an example that extracts code and outputs it into a separate `gr.Code` component:

$code_chatinterface_artifacts

**Note:** unlike the case of additional inputs, the components passed in `additional_outputs` must be already defined in your `gr.Blocks` context -- they are not rendered automatically. If you need to render them after your `gr.ChatInterface`, you can set `render=False` when they are first defined and then `.render()` them in the appropriate section of your `gr.Blocks()` as we do in the example above.
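A chat function used with `additional_outputs` simply returns extra values after the reply, one per component. As a hedged sketch of the code-extraction idea (our own regex helper, not the referenced demo's exact code):

```python
import re

def extract_code(text):
    """Return the body of the first fenced code block in text, or an empty string."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else ""

def chat(message, history):
    # A canned reply standing in for a real LLM response
    response = "Here you go:\n```python\nprint('hi')\n```"
    # First value goes to the chatbot, second to a gr.Code component
    return response, extract_code(response)

reply, code = chat("Write hello world", [])
print(code)  # print('hi')
```

With `gr.ChatInterface(chat, additional_outputs=[code_component])`, the second return value lands in the `gr.Code` component.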
Additional Outputs
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
We mentioned earlier that in the simplest case, your chat function should return a `str` response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:

**Returning files or Gradio components**

Currently, the following Gradio components can be displayed inside the chat interface:

* `gr.Image`
* `gr.Plot`
* `gr.Audio`
* `gr.HTML`
* `gr.Video`
* `gr.Gallery`
* `gr.File`

Simply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example that returns an audio file:

```py
import gradio as gr

def music(message, history):
    if message.strip():
        return gr.Audio("https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav")
    else:
        return "Please provide the name of an artist"

gr.ChatInterface(
    music,
    textbox=gr.Textbox(placeholder="Which artist's music do you want to listen to?", scale=7),
).launch()
```

Similarly, you could return image files with `gr.Image`, video files with `gr.Video`, or arbitrary files with the `gr.File` component.

**Returning Multiple Messages**

You can return multiple assistant messages from your chat function simply by returning a `list` of messages, each of which is a valid chat type. This lets you, for example, send a message along with files, as in the following example:

$code_chatinterface_echo_multimodal

**Displaying intermediate thoughts or tool usage**

The `gr.ChatInterface` class supports displaying intermediate thoughts or tool usage directly in the chatbot.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/nested-thought.png)

To do this, you will need to return a `gr.ChatMessage` object from your chat function. Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:

```py
MessageContent = Union[str, FileDataDict, FileData, Component]

@dataclass
class ChatMessage:
    content: Me
Returning Complex Responses
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
ma of the `gr.ChatMessage` data class as well as two internal typed dictionaries:

```py
MessageContent = Union[str, FileDataDict, FileData, Component]

@dataclass
class ChatMessage:
    content: MessageContent | list[MessageContent]
    metadata: MetadataDict = None
    options: list[OptionDict] = None

class MetadataDict(TypedDict):
    title: NotRequired[str]
    id: NotRequired[int | str]
    parent_id: NotRequired[int | str]
    log: NotRequired[str]
    duration: NotRequired[float]
    status: NotRequired[Literal["pending", "done"]]

class OptionDict(TypedDict):
    label: NotRequired[str]
    value: str
```

As you can see, the `gr.ChatMessage` dataclass is similar to the openai-style message format, e.g. it has a "content" key that refers to the chat message content. But it also includes a "metadata" key whose value is a dictionary. If this dictionary includes a "title" key, the resulting message is displayed as an intermediate thought with the title being displayed on top of the thought. Here's an example showing the usage:

$code_chatinterface_thoughts

You can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include "id" and "parent_id" keys in the "metadata" dictionary. Read our [dedicated guide on displaying intermediate thoughts and tool usage](/guides/agents-and-tool-usage) for more realistic examples.

**Providing preset responses**

When returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, you will again return a `gr.ChatMessage` instance from your chat function. This time, make sure to set the `options` key specifying the preset responses.

As shown in the schema for `gr.ChatMessage` above, the value corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an opt
Returning Complex Responses
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, is the text displayed as the preset response instead of the `value`).

This example illustrates how to use preset responses:

$code_chatinterface_options
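In plain dictionary form (which Gradio's chatbot accepts alongside `gr.ChatMessage`), an assistant message with preset responses might look like the following. The labels and values are illustrative:

```python
message = {
    "role": "assistant",
    "content": "Would you like to know more?",
    "options": [
        {"value": "Yes, tell me more", "label": "Yes"},  # label is shown; value is sent
        {"value": "No, thanks"},  # no label: the value itself is displayed
    ],
}

# The value (not the label) is what reaches the chat function when clicked
assert message["options"][0]["value"] == "Yes, tell me more"
```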
Returning Complex Responses
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You may wish to modify the value of the chatbot with your own events, other than those prebuilt in the `gr.ChatInterface`. For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use the `gr.ChatInterface.chatbot_value` as the input or output component in such events. In this example, we use a `gr.Radio` component to prefill the chatbot with certain conversations:

$code_chatinterface_prefill
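The pattern is to map each radio choice to a ready-made history and return it as the new chatbot value. A minimal sketch with made-up conversations (the actual prefill demo's code may differ):

```python
# Map each gr.Radio choice to a preset conversation (illustrative content)
PRESETS = {
    "Greeting": [
        {"role": "user", "content": "Hi there!"},
        {"role": "assistant", "content": "Hello! How can I help?"},
    ],
    "Empty": [],
}

def prefill(choice):
    # Wire as e.g.: radio.change(prefill, radio, chat_interface.chatbot_value)
    return PRESETS.get(choice, [])

print(len(prefill("Greeting")))  # 2
```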
Modifying the Chatbot Value Directly
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
Once you've built your Gradio chat interface and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API. The API route will be the name of the function you pass to the ChatInterface. So if `gr.ChatInterface(respond)`, then the API route is `/respond`. The endpoint just expects the user's message and will return the response, internally keeping track of the message history. ![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f) To use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client). Or, you can deploy your Chat Interface to other platforms, such as a: * Slack bot [[tutorial]](../guides/creating-a-slack-bot-from-a-gradio-app) * Website widget [[tutorial]](../guides/creating-a-website-widget-from-a-gradio-chatbot)
Using Your Chatbot via API
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on [Hugging Face Spaces](https://hf.space), each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories. To enable this feature, simply set `gr.ChatInterface(save_history=True)` (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.
Chat History
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
To gather feedback on your chat model, set `gr.ChatInterface(flagging_mode="manual")` and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the `flagging_dir` parameter). You can also change the feedback options via `flagging_options` parameter. The default options are "Like" and "Dislike", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled: $code_chatinterface_streaming_echo Note that in this example, we set several flagging options: "Like", "Spam", "Inappropriate", "Other". Because the case-sensitive string "Like" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. The three other flagging options will appear in a dropdown under the flag icon.
Collecting User Feedback
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
Now that you've learned about the `gr.ChatInterface` class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:

* [Our next Guide](../guides/chatinterface-examples) shows examples of how to use `gr.ChatInterface` with popular LLM libraries.
* If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as [discussed in this Guide](../guides/creating-a-custom-chatbot-with-blocks).
* Once you've deployed your Gradio Chat Interface, it's easy to use in other applications because of the built-in API. Here's a tutorial on [how to deploy a Gradio chat interface as a Discord bot](../guides/creating-a-discord-bot-from-a-gradio-app).
What's Next?
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
The chat widget appears as a small button in the corner of your website. When clicked, it opens a chat interface that communicates with your Gradio app via the JavaScript Client API. Users can ask questions and receive responses directly within the widget.
How does it work?
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
* A running Gradio app (local or on Hugging Face Spaces). In this example, we'll use the [Gradio Playground Space](https://huggingface.co/spaces/abidlabs/gradio-playground-bot), which helps generate code for Gradio apps based on natural language descriptions.

1. Create and Style the Chat Widget

First, add this HTML and CSS to your website:

```html
<div id="chat-widget" class="chat-widget">
    <button id="chat-toggle" class="chat-toggle">💬</button>
    <div id="chat-container" class="chat-container hidden">
        <div id="chat-header">
            <h3>Gradio Assistant</h3>
            <button id="close-chat">×</button>
        </div>
        <div id="chat-messages"></div>
        <div id="chat-input-area">
            <input type="text" id="chat-input" placeholder="Ask a question...">
            <button id="send-message">Send</button>
        </div>
    </div>
</div>

<style>
.chat-widget {
    position: fixed;
    bottom: 20px;
    right: 20px;
    z-index: 1000;
}

.chat-toggle {
    width: 50px;
    height: 50px;
    border-radius: 50%;
    background: #007bff;
    border: none;
    color: white;
    font-size: 24px;
    cursor: pointer;
}

.chat-container {
    position: fixed;
    bottom: 80px;
    right: 20px;
    width: 300px;
    height: 400px;
    background: white;
    border-radius: 10px;
    box-shadow: 0 0 10px rgba(0,0,0,0.1);
    display: flex;
    flex-direction: column;
}

.chat-container.hidden {
    display: none;
}

#chat-header {
    padding: 10px;
    background: #007bff;
    color: white;
    border-radius: 10px 10px 0 0;
    display: flex;
    justify-content: space-between;
    align-items: center;
}

#chat-messages {
    flex-grow: 1;
    overflow-y: auto;
    padding: 10px;
}

#chat-input-area {
    padding: 10px;
    border-top: 1px solid #eee;
    display: flex;
}

#chat-input {
    flex-grow: 1;
    padding: 8px;
    border: 1px solid #ddd;
    border-radius: 4px;
    margin-right: 8px;
}

.message {
    margin: 8px 0;
    pad
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
solid #eee;
    display: flex;
}

#chat-input {
    flex-grow: 1;
    padding: 8px;
    border: 1px solid #ddd;
    border-radius: 4px;
    margin-right: 8px;
}

.message {
    margin: 8px 0;
    padding: 8px;
    border-radius: 4px;
}

.user-message {
    background: #e9ecef;
    margin-left: 20px;
}

.bot-message {
    background: #f8f9fa;
    margin-right: 20px;
}
</style>
```

2. Add the JavaScript

Then, add the following JavaScript code (which uses the Gradio JavaScript Client to connect to the Space) to your website by including this in the `<head>` section of your website:

```html
<script type="module">
    import { Client } from "https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js";

    async function initChatWidget() {
        const client = await Client.connect("https://abidlabs-gradio-playground-bot.hf.space");

        const chatToggle = document.getElementById('chat-toggle');
        const chatContainer = document.getElementById('chat-container');
        const closeChat = document.getElementById('close-chat');
        const chatInput = document.getElementById('chat-input');
        const sendButton = document.getElementById('send-message');
        const messagesContainer = document.getElementById('chat-messages');

        chatToggle.addEventListener('click', () => {
            chatContainer.classList.remove('hidden');
        });

        closeChat.addEventListener('click', () => {
            chatContainer.classList.add('hidden');
        });

        async function sendMessage() {
            const userMessage = chatInput.value.trim();
            if (!userMessage) return;

            appendMessage(userMessage, 'user');
            chatInput.value = '';

            try {
                const result = await client.predict("/chat", {
                    message: {"text": userMessage, "files": []}
                });
                const message = result.data[0];
                console.log(result.data[0]
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
client.predict("/chat", {
                    message: {"text": userMessage, "files": []}
                });
                const message = result.data[0];
                console.log(result.data[0]);
                const botMessage = result.data[0].join('\n');
                appendMessage(botMessage, 'bot');
            } catch (error) {
                console.error('Error:', error);
                appendMessage('Sorry, there was an error processing your request.', 'bot');
            }
        }

        function appendMessage(text, sender) {
            const messageDiv = document.createElement('div');
            messageDiv.className = `message ${sender}-message`;
            if (sender === 'bot') {
                messageDiv.innerHTML = marked.parse(text);
            } else {
                messageDiv.textContent = text;
            }
            messagesContainer.appendChild(messageDiv);
            messagesContainer.scrollTop = messagesContainer.scrollHeight;
        }

        sendButton.addEventListener('click', sendMessage);
        chatInput.addEventListener('keypress', (e) => {
            if (e.key === 'Enter') sendMessage();
        });
    }

    initChatWidget();
</script>
```

3. That's it!

Your website now has a chat widget that connects to your Gradio app! Users can click the chat button to open the widget and start interacting with your app.

Customization

You can customize the appearance of the widget by modifying the CSS. Some ideas:

- Change the colors to match your website's theme
- Adjust the size and position of the widget
- Add animations for opening/closing
- Modify the message styling

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)

If you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are hap
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif) If you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
**Important Note**: if you are getting started, we recommend using the `gr.ChatInterface` to create chatbots -- it's a high-level abstraction that makes it possible to create beautiful chatbot applications fast, often with a single line of code. [Read more about it here](/guides/creating-a-chatbot-fast).

This tutorial will show how to make chatbot UIs from scratch with Gradio's low-level Blocks API. This will give you full control over your Chatbot UI. You'll start by first creating a simple chatbot to display text, a second one to stream text responses, and finally a chatbot that can handle media files as well. The chatbot interface that we create will look something like this:

$demo_chatbot_streaming

**Prerequisite**: We'll be using the `gradio.Blocks` class to build our Chatbot demo. You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.
Introduction
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
Let's start with recreating the simple demo above. As you may have noticed, our bot simply randomly responds "How are you?", "Today is a great day", or "I'm very hungry" to any input. Here's the code to create this with Gradio:

$code_chatbot_simple

There are three Gradio components here:

- A `Chatbot`, whose value stores the entire history of the conversation, as a list of response pairs between the user and bot.
- A `Textbox` where the user can type their message, and then hit enter/submit to trigger the chatbot response
- A `ClearButton` button to clear the Textbox and entire Chatbot history

We have a single function, `respond()`, which takes in the entire history of the chatbot, appends a random message, waits 1 second, and then returns the updated chat history. The `respond()` function also clears the textbox when it returns.

Of course, in practice, you would replace `respond()` with your own more complex function, which might call a pretrained model or an API, to generate a response.

$demo_chatbot_simple

Tip: For better type hinting and auto-completion in your IDE, you can use the `gr.ChatMessage` dataclass:

```python
from gradio import ChatMessage

def chat_function(message, history):
    history.append(ChatMessage(role="user", content=message))
    history.append(ChatMessage(role="assistant", content="Hello, how can I help you?"))
    return history
```
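As a hedged sketch, a `respond()` function like the one described (in the messages format; the actual `$code_chatbot_simple` listing may differ) could look like:

```python
import random
import time

def respond(message, chat_history):
    bot_message = random.choice(["How are you?", "Today is a great day", "I'm very hungry"])
    chat_history.append({"role": "user", "content": message})
    chat_history.append({"role": "assistant", "content": bot_message})
    time.sleep(1)
    # Return "" as the first value to clear the textbox alongside the updated history
    return "", chat_history

cleared, history = respond("Hello", [])
print(len(history))  # 2
```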
A Simple Chatbot Demo
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
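The `respond()` function described above can be sketched in plain Python. This is a minimal sketch, not the full demo: the canned replies come from the text above, the messages-format history is assumed, and the Blocks wiring is only indicated in a comment.

```python
import random
import time

def respond(message, chat_history):
    # Pick one of the demo's canned replies at random
    bot_message = random.choice(
        ["How are you?", "Today is a great day", "I'm very hungry"]
    )
    chat_history.append({"role": "user", "content": message})
    chat_history.append({"role": "assistant", "content": bot_message})
    time.sleep(1)
    # Return "" to clear the Textbox, plus the updated history
    return "", chat_history

# In the Blocks app, this would be wired up roughly as:
#   msg.submit(respond, [msg, chatbot], [msg, chatbot])
```

Because both outputs are returned from one function, the textbox clears and the chat history updates in a single event.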
There are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that: $code_chatbot_streaming You'll notice that when a user submits their message, we now _chain_ two events with `.then()`: 1. The first method `user()` updates the chatbot with the user message and clears the input field. Because we want this to happen instantly, we set `queue=False`, which would skip any queue had it been enabled. The chatbot's history is appended with `{"role": "user", "content": user_message}`. 2. The second method, `bot()`, updates the chatbot history with the bot's response. Then, we construct the message character by character and `yield` the intermediate outputs as they are being constructed. Gradio automatically turns any function with the `yield` keyword [into a streaming output interface](/guides/key-features/iterative-outputs). Of course, in practice, you would replace `bot()` with your own more complex function, which might call a pretrained model or an API, to generate a response.
Add Streaming to your Chatbot
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
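The two chained steps can be sketched as plain functions. The fixed response string here is an assumption for illustration, and the `.then()` wiring is shown only in a comment.

```python
def user(user_message, history):
    # Step 1: append the user's message immediately and clear the textbox
    return "", history + [{"role": "user", "content": user_message}]

def bot(history):
    # Step 2: build the assistant reply character by character,
    # yielding the partial history so the UI streams the response
    response = "Today is a great day"
    history.append({"role": "assistant", "content": ""})
    for character in response:
        history[-1]["content"] += character
        yield history

# In the Blocks app, the two steps would be chained roughly as:
#   msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
#       bot, chatbot, chatbot
#   )
```

Because `bot()` is a generator, each `yield` pushes the partially built history to the `Chatbot` component, producing the typing effect.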
The `gr.Chatbot` component supports a subset of markdown including bold, italics, and code. For example, we could write a function that responds to a user's message, with a bold **That's cool!**, like this: ```py def bot(history): response = {"role": "assistant", "content": "**That's cool!**"} history.append(response) return history ``` In addition, it can handle media files, such as images, audio, and video. You can use the `MultimodalTextbox` component to easily upload all types of media files to your chatbot. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. To pass in a media file, we pass in the file as a dictionary with a `path` key pointing to a local file and an `alt_text` key. The `alt_text` is optional, so you can also just pass in a dictionary with the single `path` key, `{"path": "filepath"}`, like this: ```python def add_message(history, message): for x in message["files"]: history.append({"role": "user", "content": {"path": x}}) if message["text"] is not None: history.append({"role": "user", "content": message["text"]}) return history, gr.MultimodalTextbox(value=None, interactive=False, file_types=["image"], sources=["upload", "microphone"]) ``` Putting this together, we can create a _multimodal_ chatbot with a multimodal textbox for a user to submit text and media files. The rest of the code looks pretty much the same as before: $code_chatbot_multimodal $demo_chatbot_multimodal And you're done! That's all the code you need to build an interface for your chatbot model. Finally, we'll end our Guide with some links to Chatbots that are running on Spaces so that you can get an idea of what else is possible: - [gradio/chatbot_streaming](https://huggingface.co/spaces/gradio/chatbot_streaming): A streaming chatbot demo built with `gr.Chatbot` and Blocks. - [gradio/chatbot_examples](https://huggingface.co/spaces/gradio/chatbot_examples): A chatbo
Adding Markdown, Images, Audio, or Videos
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide