[{"text": "If you already have a recent version of `gradio`, then the `gradio_client` is\nincluded as a dependency. But note that this documentation reflects the latest\nversion of the `gradio_client`, so upgrade if you\u2019re not sure!\n\nThe lightweight `gradio_client` package can be installed from pip (or pip3)\nand is tested to work with **Python versions 3.9 or higher**:\n\n \n \n $ pip install --upgrade gradio_client\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Spaces\n\nStart by instantiating a `Client` object and connecting it to a\nGradio app that is running on Hugging Face Spaces.\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/en2fr\")  # a Space that translates from English to French\n\nYou can also connect to private Spaces by passing in your HF token with the\n`hf_token` parameter. You can get your HF token here:\nhttps://huggingface.co/settings/tokens\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/my-private-space\", hf_token=\"...\")\n\n", "heading1": "Connecting to a Gradio App on Hugging Face", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "use\n\nWhile you can use any public Space as an API, you may get rate limited by\nHugging Face if you make too many requests. 
For unlimited usage of a Space,\nsimply duplicate the Space to create a private Space, and then use it to make\nas many requests as you\u2019d like!\n\nThe `gradio_client` includes a class method, `Client.duplicate()`, to make this\nprocess simple (you\u2019ll need to pass in your [Hugging Face\ntoken](https://huggingface.co/settings/tokens) or be logged in using the\nHugging Face CLI):\n\n \n \n import os\n from gradio_client import Client, file\n \n HF_TOKEN = os.environ.get(\"HF_TOKEN\")\n \n client = Client.duplicate(\"abidlabs/whisper\", hf_token=HF_TOKEN)\n client.predict(file(\"audio_sample.wav\"))\n \n >> \"This is a test of the whisper speech recognition model.\"\n\nIf you have previously duplicated a Space, re-running `duplicate()` will _not_\ncreate a new Space. Instead, the Client will attach to the previously-created\nSpace. So it is safe to re-run the `Client.duplicate()` method multiple times.\n\n**Note:** if the original Space uses GPUs, your private Space will as well,\nand your Hugging Face account will get billed based on the price of the GPU.\nTo minimize charges, your Space will automatically go to sleep after 1 hour of\ninactivity. You can also set the hardware using the `hardware` parameter of\n`duplicate()`.\n\n", "heading1": "Duplicating a Space for private", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "app\n\nIf your app is running somewhere else, just provide the full URL instead,\nincluding the \u201chttp://\u201d or \u201chttps://\u201d. 
Here\u2019s an example of making predictions\nto a Gradio app that is running on a share URL:\n\n \n \n from gradio_client import Client\n \n client = Client(\"https://bec81a83-5b5c-471e.gradio.live\")\n\n", "heading1": "Connecting a general Gradio", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are\navailable to you by calling the `Client.view_api()` method. For the Whisper\nSpace, we see the following:\n\n \n \n Client.predict() Usage Info\n ---------------------------\n Named API endpoints: 1\n \n - predict(audio, api_name=\"/predict\") -> output\n Parameters:\n - [Audio] audio: filepath (required) \n Returns:\n - [Textbox] output: str \n\nWe see that we have 1 API endpoint in this Space, and this shows us how to use the\nAPI endpoint to make a prediction: we should call the `.predict()` method\n(which we will explore below), providing a parameter `audio` of type\n`str`, which is a filepath or URL.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()`\nmethod. 
Although this isn\u2019t necessary if a Gradio app has only 1 named\nendpoint, it does allow us to call different endpoints in a single app if they\nare available.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the\n\u201cUse via API\u201d link in the footer of the Gradio app, which shows us the same\ninformation, along with example usage.\n\n![](https://huggingface.co/datasets/huggingface/documentation-\nimages/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \u201cAPI Recorder\u201d that lets you interact with\nthe Gradio UI normally and converts your interactions into the corresponding\ncode to run with the Python Client.\n\n", "heading1": "The \u201cView API\u201d Page", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The simplest way to make a prediction is to call the `.predict()`\nfunction with the appropriate arguments:\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/en2fr\")\n client.predict(\"Hello\", api_name=\"/predict\")\n \n >> Bonjour\n\nIf there are multiple parameters, then you should pass them as separate\narguments to `.predict()`, like this:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/calculator\")\n client.predict(4, \"add\", 5)\n \n >> 9.0\n\nIt is recommended to provide keyword arguments instead of positional\narguments:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/calculator\")\n client.predict(num1=4, operation=\"add\", num2=5)\n \n >> 9.0\n\nThis allows you to take advantage of default arguments. 
For example, this\nSpace includes a default value for the Slider component, so you do not need\nto provide it when accessing it with the client.\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/image_generator\")\n client.predict(text=\"an astronaut riding a camel\")\n\nThe default value is the initial value of the corresponding Gradio component.\nIf the component does not have an initial value, but if the corresponding\nargument in the predict function has a default value of `None`, then that\nparameter is also optional in the client. Of course, if you\u2019d like to override\nit, you can include it as well:\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/image_generator\")\n client.predict(text=\"an astronaut riding a camel\", steps=25)\n\nFor providing files or URLs as inputs, you should pass in the filepath or URL\nto the file enclosed within `gradio_client.file()`. This takes care of\nuploading the file to the Gradio server and ensures that the file is\npreprocessed correctly:\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "\n \n from gradio_client import Client, file\n \n client = Client(\"abidlabs/whisper\")\n client.predict(\n audio=file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\")\n )\n \n >> \"My thought I have nobody by a beauty and will as you poured. Mr. 
Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r\u2014\"\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "We should note that `.predict()` is a _blocking_ operation as it waits for the\noperation to complete before returning the prediction.\n\nIn many cases, you may be better off letting the job run in the background\nuntil you need the results of the prediction. You can do this by creating a\n`Job` instance using the `.submit()` method, and then later calling\n`.result()` on the job to get the result. For example:\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/en2fr\")\n job = client.submit(\"Hello\", api_name=\"/predict\")  # This is not blocking\n \n # Do something else\n \n job.result()  # This is blocking\n \n >> Bonjour\n\n", "heading1": "Running jobs asynchronously", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Alternatively, one can add one or more callbacks to perform actions after the\njob has completed running, like this:\n\n \n \n from gradio_client import Client\n \n def print_result(x):\n print(f\"The translated result is: {x}\")\n \n client = Client(\"abidlabs/en2fr\")\n \n job = client.submit(\"Hello\", api_name=\"/predict\", result_callbacks=[print_result])\n \n # Do something else\n \n >> The translated result is: Bonjour\n \n\n", "heading1": "Adding callbacks", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The `Job` object also allows you to get the status of the running job by\ncalling the `.status()` method. This returns a `StatusUpdate` object with the\nfollowing attributes: `code` (the status code, one of a set of defined strings\nrepresenting the status. 
See the `utils.Status` class), `rank` (the current\nposition of this job in the queue), `queue_size` (the total queue size), `eta`\n(estimated time this job will complete), `success` (a boolean representing\nwhether the job completed successfully), and `time` (the time that the status\nwas generated).\n\n \n \n from gradio_client import Client\n \n client = Client(src=\"gradio/calculator\")\n job = client.submit(5, \"add\", 4, api_name=\"/predict\")\n job.status()\n \n >> \n\n_Note_: The `Job` class also has a `.done()` instance method which returns a\nboolean indicating whether the job has completed.\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The `Job` class also has a `.cancel()` instance method that cancels jobs that\nhave been queued but not started. For example, if you run:\n\n \n \n from gradio_client import Client, file\n \n client = Client(\"abidlabs/whisper\")\n job1 = client.submit(file(\"audio_sample1.wav\"))\n job2 = client.submit(file(\"audio_sample2.wav\"))\n job1.cancel()  # will return False, assuming the job has started\n job2.cancel()  # will return True, indicating that the job has been canceled\n\nIf the first job has started processing, then it will not be canceled. If the\nsecond job has not yet started, it will be successfully canceled and removed\nfrom the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Some Gradio API endpoints do not return a single value; rather, they return a\nseries of values. 
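Server-side, such a generator endpoint is backed by a Python generator function that `yield`s one value at a time. As a minimal sketch of what the counting endpoint used below might look like (the function name and sleep interval are assumptions, and the Blocks wiring is omitted):

```python
import time

def count(n):
    # a generator endpoint yields a series of values instead of returning one
    for i in range(int(n)):
        time.sleep(0.1)
        yield str(i)
```

Each yielded value becomes one entry in the series of outputs that the client can retrieve.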
You can get the series of values that have been returned at\nany time from such a generator endpoint by running `job.outputs()`:\n\n \n \n import time\n from gradio_client import Client\n \n client = Client(src=\"gradio/count_generator\")\n job = client.submit(3, api_name=\"/count\")\n while not job.done():\n time.sleep(0.1)\n job.outputs()\n \n >> ['0', '1', '2']\n\nNote that running `job.result()` on a generator endpoint only gives you the\n_first_ value returned by the endpoint.\n\nThe `Job` object is also iterable, which means you can use it to display the\nresults of a generator function as they are returned from the endpoint. Here\u2019s\nthe equivalent example using the `Job` as a generator:\n\n \n \n from gradio_client import Client\n \n client = Client(src=\"gradio/count_generator\")\n job = client.submit(3, api_name=\"/count\")\n \n for o in job:\n print(o)\n \n >> 0\n >> 1\n >> 2\n\nYou can also cancel jobs that have iterative outputs, in which case the\njob will finish as soon as the current iteration finishes running.\n\n \n \n from gradio_client import Client\n import time\n \n client = Client(\"abidlabs/test-yield\")\n job = client.submit(\"abcdef\")\n time.sleep(3)\n job.cancel()  # job cancels after 2 iterations\n\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Gradio demos can include [session state](https://www.gradio.app/guides/state-\nin-blocks), which provides a way for demos to persist information from user\ninteractions within a page session.\n\nFor example, consider the following demo, which maintains a list of words that\na user has submitted in a `gr.State` component. 
When a user submits a new\nword, it is added to the state, and the number of previous occurrences of that\nword is displayed:\n\n \n \n import gradio as gr\n \n def count(word, list_of_words):\n return list_of_words.count(word), list_of_words + [word]\n \n with gr.Blocks() as demo:\n words = gr.State([])\n textbox = gr.Textbox()\n number = gr.Number()\n textbox.submit(count, inputs=[textbox, words], outputs=[number, words])\n \n demo.launch()\n\nIf you were to connect to this Gradio app using the Python Client, you would\nnotice that the API information only shows a single input and output:\n\n \n \n Client.predict() Usage Info\n ---------------------------\n Named API endpoints: 1\n \n - predict(word, api_name=\"/count\") -> value_31\n Parameters:\n - [Textbox] word: str (required) \n Returns:\n - [Number] value_31: float \n\nThat is because the Python client handles state automatically for you \u2014 as you\nmake a series of requests, the returned state from one request is stored\ninternally and automatically supplied for the subsequent request. If you\u2019d\nlike to reset the state, you can do that by calling `Client.reset_session()`.\n\n", "heading1": "Demos with Session State", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The main Client class for the Python client. This class is used to connect\nto a remote Gradio app and call its API endpoints. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "from gradio_client import Client\n \n client = Client(\"abidlabs/whisper-large-v2\")  # connecting to a Hugging Face Space\n client.predict(\"test.mp4\", api_name=\"/predict\")\n >> What a nice recording! 
# returns the result of the remote API call\n \n client = Client(\"https://bec81a83-5b5c-471e.gradio.live\")  # connecting to a temporary Gradio share URL\n job = client.submit(\"hello\", api_name=\"/predict\")  # runs the prediction in a background thread\n job.result()\n >> 49  # returns the result of the remote API call (blocking call)\n\n", "heading1": "Example usage", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n src: str\n\neither the name of the Hugging Face Space to load (e.g. \"abidlabs/whisper-\nlarge-v2\") or the full URL (including \"http\" or \"https\") of the hosted Gradio\napp to load (e.g. \"http://mydomain.com/app\" or\n\"https://bec81a83-5b5c-471e.gradio.live/\").\n\n\n \n \n token: str | None\n\ndefault `= None`\n\noptional Hugging Face token to use to access private Spaces. By default, the\nlocally saved token is used if there is one. Find your tokens here:\nhttps://huggingface.co/settings/tokens.\n\n\n \n \n max_workers: int\n\ndefault `= 40`\n\nmaximum number of thread workers that can be used to make requests to the\nremote Gradio app simultaneously.\n\n\n \n \n verbose: bool\n\ndefault `= True`\n\nwhether the client should print statements to the console.\n\n\n \n \n auth: tuple[str, str] | None\n\ndefault `= None`\n\n\n \n \n httpx_kwargs: dict[str, Any] | None\n\ndefault `= None`\n\nadditional keyword arguments to pass to `httpx.Client`, `httpx.stream`,\n`httpx.get` and `httpx.post`. This can be used to set timeouts, proxies, http\nauth, etc.\n\n\n \n \n headers: dict[str, str] | None\n\ndefault `= None`\n\nadditional headers to send to the remote Gradio app on every request. By\ndefault only the HF authorization and user-agent headers are sent. 
This\nparameter will override the default headers if they have the same keys.\n\n\n \n \n download_files: str | Path | Literal[False]\n\ndefault `= \"/tmp/gradio\"`\n\ndirectory where the client should download output files on the local machine\nfrom the remote API. By default, uses the value of the GRADIO_TEMP_DIR\nenvironment variable which, if not set by the user, is a temporary directory\non your machine. If False, the client does not download files and returns a\nFileData dataclass object with the filepath on the remote machine instead.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "\n \n \n ssl_verify: bool\n\ndefault `= True`\n\nif False, skips certificate validation, which allows the client to connect to\nGradio apps that are using self-signed certificates.\n\n\n \n \n analytics_enabled: bool\n\ndefault `= True`\n\nWhether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED\nenvironment variable or default to True.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Client component supports the following event listeners. 
Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Client.predict(fn, \u00b7\u00b7\u00b7)`| Calls the Gradio API and returns the result (this\nis a blocking call). Arguments can be provided as positional arguments or as\nkeyword arguments (the latter is recommended).
\n`Client.submit(fn, \u00b7\u00b7\u00b7)`| Creates and returns a Job object which calls the\nGradio API in a background thread. The job can be used to retrieve the status\nand result of the remote API call. Arguments can be provided as positional\narguments or as keyword arguments (the latter is recommended).
\n`Client.view_api(fn, \u00b7\u00b7\u00b7)`| Prints the usage info for the API. If the Gradio\napp has multiple API endpoints, the usage info for each endpoint will be\nprinted separately. If return_format=\"dict\" the info is returned in dictionary\nformat, as shown in the example below.
\n`Client.duplicate(fn, \u00b7\u00b7\u00b7)`| Duplicates a Hugging Face Space under your\naccount and returns a Client object for the new Space. No duplication is\ncreated if the Space already exists in your account (to override this, provide\na new name for the new Space using `to_id`). To use this method, you must\nprovide a `token` or be logged in via the Hugging Face Hub CLI.
The new\nSpace will be private by default and use the same hardware as the original\nSpace. This can be changed by using the `private` and `hardware` parameters.\nFor hardware upgrades (beyond the basic CPU tier), you may be required to\nprovide billing information on Hugging Face.\n
", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "
\n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n args: \n\nThe positional arguments to pass to the remote API endpoint. The order of the\narguments must match the order of the inputs in the Gradio app.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\nThe name of the API endpoint to call starting with a leading slash, e.g.\n\"/predict\". Does not need to be provided if the Gradio app has only one named\nAPI endpoint.\n\n\n \n \n fn_index: int | None\n\ndefault `= None`\n\nAs an alternative to api_name, this parameter takes the index of the API\nendpoint to call, e.g. 0. Both api_name and fn_index can be provided, but if\nthey conflict, api_name will take precedence.\n\n\n \n \n headers: dict[str, str] | None\n\ndefault `= None`\n\nAdditional headers to send to the remote Gradio app on this request. This\nparameter will override the headers provided in the Client constructor if\nthey have the same keys.\n\n\n \n \n kwargs: \n\nThe keyword arguments to pass to the remote API endpoint.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "A Job is a wrapper over the Future class that represents a prediction call\nthat has been submitted by the Gradio client. This class is not meant to be\ninstantiated directly, but rather is created by the Client.submit() method. \nA Job object includes methods to get the status of the prediction call, as\nwell as to get the outputs of the prediction call. 
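Conceptually, this wrapper pattern can be sketched with the standard library's `concurrent.futures` (a toy illustration of the idea, not the actual `Job` implementation):

```python
from concurrent.futures import Future, ThreadPoolExecutor

class ToyJob:
    # toy stand-in for gradio_client's Job: wraps the Future returned by submit()
    def __init__(self, future: Future):
        self._future = future

    def done(self) -> bool:
        return self._future.done()

    def result(self, timeout=None):
        # blocks until the background call completes, like Job.result()
        return self._future.result(timeout)

with ThreadPoolExecutor() as pool:
    job = ToyJob(pool.submit(lambda: "Bonjour"))
    print(job.result())  # -> Bonjour
```
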
Job objects are also iterable,\nand can be used in a loop to get the outputs of prediction calls as they\nbecome available for generator endpoints.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/python-client/job", "source_page_title": "Python Client - Job Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n future: Future\n\nThe future object that represents the prediction call, created by the\nClient.submit() method\n\n\n \n \n communicator: Communicator | None\n\ndefault `= None`\n\nThe communicator object that is used to communicate between the client and the\nbackground thread running the job\n\n\n \n \n verbose: bool\n\ndefault `= True`\n\nWhether to print any status-related messages to the console\n\n\n \n \n space_id: str | None\n\ndefault `= None`\n\nThe space ID corresponding to the Client object that created this Job object\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/python-client/job", "source_page_title": "Python Client - Job Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Job component supports the following event listeners. Each event listener\ntakes the same parameters, which are listed in the Event Parameters table\nbelow.\n\nListener| Description \n---|--- \n`Job.result(fn, \u00b7\u00b7\u00b7)`| Return the result of the call that the future\nrepresents. Raises CancelledError if the future was cancelled, TimeoutError if\nthe future didn't finish executing before the given timeout; and if the call\nraised an exception, that exception will be raised.
\n`Job.outputs(fn, \u00b7\u00b7\u00b7)`| Returns a list containing the latest outputs from the\nJob.
If the endpoint has multiple output components, the list will\ncontain tuples of results. Otherwise, it will contain the results directly,\nwithout wrapping them in tuples.
For endpoints that are queued, this list will\ncontain the final job output even if that endpoint does not use a generator\nfunction.
\n`Job.status(fn, \u00b7\u00b7\u00b7)`| Returns the latest status update from the Job in the\nform of a StatusUpdate object, which contains the following fields: code,\nrank, queue_size, success, time, eta, and progress_data.
progress_data is\na list of updates emitted by the gr.Progress() tracker of the event handler.\nEach element of the list has the following fields: index, length, unit,\nprogress, desc. If the event handler does not have a gr.Progress() tracker,\nthe progress_data field will be None.
\n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n timeout: float | None\n\ndefault `= None`\n\nThe number of seconds to wait for the result if the future isn't done. If\nNone, then there is no limit on the wait time.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/python-client/job", "source_page_title": "Python Client - Job Docs"}, {"text": "**Stream From a Gradio app in 5 lines**\n\n \n\nUse the `submit` method to get a job you can iterate over.\n\n \n\nIn python:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/llm_stream\")\n \n for result in client.submit(\"What's the best UI framework in Python?\"):\n print(result)\n\n \n\nIn typescript:\n\n \n \n import { Client } from \"@gradio/client\";\n \n const client = await Client.connect(\"gradio/llm_stream\")\n const job = client.submit(\"/predict\", {\"text\": \"What's the best UI framework in Python?\"})\n \n for await (const msg of job) console.log(msg.data)\n\n \n\n**Use the same keyword arguments as the app**\n\n \nIn the examples below, the upstream app has a function with parameters called\n`message`, `system_prompt`, and `tokens`. 
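For reference, a hedged sketch of what such an upstream handler might look like (the function body and defaults are hypothetical; only the parameter names are taken from the example below):

```python
def chat(message, system_prompt="You are helpful AI.", tokens=10):
    # hypothetical upstream handler: its parameter names become the
    # keyword arguments that the client passes to predict()
    return f"[{system_prompt}] {message} (max {tokens} tokens)"

print(chat(message="Hello!!"))
```
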
We can see that the client `predict`\ncall uses the same arguments.\n\nIn python:\n\n \n \n from gradio_client import Client\n \n client = Client(\"http://127.0.0.1:7860/\")\n result = client.predict(\n \t\tmessage=\"Hello!!\",\n \t\tsystem_prompt=\"You are helpful AI.\",\n \t\ttokens=10,\n \t\tapi_name=\"/chat\"\n )\n print(result)\n\nIn typescript:\n\n \n \n import { Client } from \"@gradio/client\";\n \n const client = await Client.connect(\"http://127.0.0.1:7860/\");\n const result = await client.predict(\"/chat\", { \t\t\n \t\tmessage: \"Hello!!\", \t\t\n \t\tsystem_prompt: \"You are helpful AI.\", \t\t\n \t\ttokens: 10, \n });\n \n console.log(result.data);\n\n \n\n**Better Error Messages**\n\n \nIf something goes wrong in the upstream app, the client will raise the same\nexception as the app, provided that `show_error=True` in the original app's\n`launch()` function, or it's a `gr.Error` exception.\n\n", "heading1": "Ergonomic API \ud83d\udc86", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "Anything you can do in the UI, you can do with the client:\n\n * \ud83d\udd10 Authentication\n * \ud83d\uded1 Job Cancelling\n * \u2139\ufe0f Access Queue Position and API\n * \ud83d\udcd5 View the API information\n\n \nHere's an example showing how to display the queue position of a pending job:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/diffusion_model\")\n \n job = client.submit(\"A cute cat\")\n while not job.done():\n status = job.status()\n print(f\"Currently in position {status.rank} out of {status.queue_size}\")\n\n", "heading1": "Transparent Design \ud83e\ude9f", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "The client can run from pretty much any Python and JavaScript environment\n(node, deno, the browser, Service Workers). 
\nHere's an example using the client from a Flask server using gevent:\n\n \n \n from gevent import monkey\n monkey.patch_all()\n \n from gradio_client import Client\n from flask import Flask, send_file\n \n app = Flask(__name__)\n \n imageclient = Client(\"gradio/diffusion_model\")\n \n @app.route(\"/gen\")\n def gen():\n result = imageclient.predict(\n \"A cute cat\",\n api_name=\"/predict\"\n )\n return send_file(result)\n \n if __name__ == \"__main__\":\n app.run(host=\"0.0.0.0\", port=5000)\n\n", "heading1": "Portable Design \u26fa\ufe0f", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "Changes\n\n \n\n**Python**\n\n * The `serialize` argument of the `Client` class was removed and has no effect.\n * The `upload_files` argument of the `Client` was removed.\n * All filepaths must be wrapped in the `handle_file` method. For example, `caption = client.predict(handle_file('./dog.jpg'))`.\n * The `output_dir` argument was removed. It is now specified via the `download_files` argument.\n\n \n\n**JavaScript**\n\n \nThe client has been redesigned entirely. It was refactored from a function\ninto a class. An instance can now be constructed by awaiting the `connect`\nmethod.\n\n \n \n const app = await Client.connect(\"gradio/whisper\")\n\nThe app variable has the same methods as the Python class (`submit`,\n`predict`, `view_api`, `duplicate`).\n\n", "heading1": "v1.0 Migration Guide and Breaking", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "ZeroGPU\n\nZeroGPU spaces are rate-limited to ensure that a single user does not hog all\nof the available GPUs. The limit is controlled by a special token that the\nHugging Face Hub infrastructure adds to all incoming requests to Spaces. 
This\ntoken is a request header called `X-IP-Token` and its value changes depending\non the user who makes a request to the ZeroGPU space.\n\n \n\nLet\u2019s say you want to create a space (Space A) that uses a ZeroGPU space\n(Space B) programmatically. Normally, calling Space B from Space A with the\nGradio Python client would quickly exhaust Space B\u2019s rate limit, as all the\nrequests to the ZeroGPU space would be missing the `X-IP-Token` request header\nand would therefore be treated as unauthenticated.\n\nIn order to avoid this, we need to extract the `X-IP-Token` of the user using\nSpace A before we call Space B programmatically. Where possible, specifically\nin the case of functions that are passed into event listeners directly, Gradio\nautomatically extracts the `X-IP-Token` from the incoming request and passes\nit into the Gradio Client. But if the Client is instantiated outside of such a\nfunction, then you may need to pass in the token manually.\n\nHow to do this will be explained in the following section.\n\n", "heading1": "Explaining Rate Limits for", "source_page_url": "https://gradio.app/docs/python-client/using-zero-gpu-spaces", "source_page_title": "Python Client - Using Zero Gpu Spaces Docs"}, {"text": "Token\n\nIn the following hypothetical example, when a user presses enter in the\ntextbox, the `generate()` function is called, which calls a second function,\n`text_to_image()`. Because the Gradio Client is being instantiated indirectly,\nin `text_to_image()`, we will need to extract their token from the `X-IP-\nToken` header of the incoming request. 
We will use this header when\nconstructing the Gradio client.\n\n \n \n import gradio as gr\n from gradio_client import Client\n \n def text_to_image(prompt, request: gr.Request):\n x_ip_token = request.headers['x-ip-token']\n client = Client(\"hysts/SDXL\", headers={\"x-ip-token\": x_ip_token})\n img = client.predict(prompt, api_name=\"/predict\")\n return img\n \n def generate(prompt, request: gr.Request):\n prompt = prompt[:300]\n return text_to_image(prompt, request)\n \n with gr.Blocks() as demo:\n image = gr.Image()\n prompt = gr.Textbox(max_lines=1)\n prompt.submit(generate, [prompt], [image])\n \n demo.launch()\n\n", "heading1": "Avoiding Rate Limits by Manually Passing an IP", "source_page_url": "https://gradio.app/docs/python-client/using-zero-gpu-spaces", "source_page_title": "Python Client - Using Zero Gpu Spaces Docs"}, {"text": "`gradio-rs` is a Gradio Client in Rust built by\n[@JacobLinCool](https://github.com/JacobLinCool). You can find the repo\n[here](https://github.com/JacobLinCool/gradio-rs), and more in-depth API\ndocumentation [here](https://docs.rs/gradio/latest/gradio/).\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/docs/third-party-clients/rust-client", "source_page_title": "Third Party Clients - Rust Client Docs"}, {"text": "Here is an example of using the BS-RoFormer model to separate vocals and\nbackground music from an audio file.\n\n \n \n use gradio::{PredictionInput, Client, ClientOptions};\n \n #[tokio::main]\n async fn main() {\n if std::env::args().len() < 2 {\n println!(\"Please provide an audio file path as an argument\");\n std::process::exit(1);\n }\n let args: Vec<String> = std::env::args().collect();\n let file_path = &args[1];\n println!(\"File: {}\", file_path);\n \n let client = Client::new(\"JacobLinCool/vocal-separation\", ClientOptions::default())\n .await\n .unwrap();\n \n let output = client\n .predict(\n \"/separate\",\n vec![\n PredictionInput::from_file(file_path),\n 
PredictionInput::from_value(\"BS-RoFormer\"),\n ],\n )\n .await\n .unwrap();\n println!(\n \"Vocals: {}\",\n output[0].clone().as_file().unwrap().url.unwrap()\n );\n println!(\n \"Background: {}\",\n output[1].clone().as_file().unwrap().url.unwrap()\n );\n }\n\nYou can find more examples [here](https://github.com/JacobLinCool/gradio-rs/tree/main/examples).\n\n", "heading1": "Usage", "source_page_url": "https://gradio.app/docs/third-party-clients/rust-client", "source_page_title": "Third Party Clients - Rust Client Docs"}, {"text": "cargo install gradio\n gr --help\n\nTake the [stabilityai/stable-diffusion-3-medium](https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium) HF Space as an example:\n\n \n \n > gr list stabilityai/stable-diffusion-3-medium\n API Spec for stabilityai/stable-diffusion-3-medium:\n /infer\n Parameters:\n prompt ( str ) \n negative_prompt ( str ) \n seed ( float ) numeric value between 0 and 2147483647\n randomize_seed ( bool ) \n width ( float ) numeric value between 256 and 1344\n height ( float ) numeric value between 256 and 1344\n guidance_scale ( float ) numeric value between 0.0 and 10.0\n num_inference_steps ( float ) numeric value between 1 and 50\n Returns:\n Result ( filepath ) \n Seed ( float ) numeric value between 0 and 2147483647\n \n > gr run stabilityai/stable-diffusion-3-medium infer 'Rusty text \"AI & CLI\" on the snow.' 
'' 0 true 1024 1024 5 28\n Result: https://stabilityai-stable-diffusion-3-medium.hf.space/file=/tmp/gradio/5735ca7775e05f8d56d929d8f57b099a675c0a01/image.webp\n Seed: 486085626\n\nFor file input, simply use the file path as the argument:\n\n \n \n gr run hf-audio/whisper-large-v3 predict 'test-audio.wav' 'transcribe'\n output: \" Did you know you can try the coolest model on your command line?\"\n\n", "heading1": "Command Line Interface", "source_page_url": "https://gradio.app/docs/third-party-clients/rust-client", "source_page_title": "Third Party Clients - Rust Client Docs"}, {"text": "Gradio applications support programmatic requests from many environments:\n\n * The [Python Client](/docs/python-client): `gradio-client` allows you to make requests from Python environments.\n * The [JavaScript Client](/docs/js-client): `@gradio/client` allows you to make requests in TypeScript from the browser or server-side.\n * You can also query gradio apps [directly from cURL](/guides/querying-gradio-apps-with-curl).\n\n", "heading1": "Gradio Clients", "source_page_url": "https://gradio.app/docs/third-party-clients/introduction", "source_page_title": "Third Party Clients - Introduction Docs"}, {"text": "We also encourage the development and use of third party clients built by\nthe community:\n\n * [Rust Client](/docs/third-party-clients/rust-client): `gradio-rs` built by [@JacobLinCool](https://github.com/JacobLinCool) allows you to make requests in Rust.\n * [Powershell Client](https://github.com/rrg92/powershai): `powershai` built by [@rrg92](https://github.com/rrg92) allows you to make requests to Gradio apps directly from Powershell. 
See [here for documentation](https://github.com/rrg92/powershai/blob/main/docs/en-US/providers/HUGGING-FACE.md)\n\n", "heading1": "Community Clients", "source_page_url": "https://gradio.app/docs/third-party-clients/introduction", "source_page_title": "Third Party Clients - Introduction Docs"}, {"text": "Creates a set of (string or numeric type) radio buttons of which only one\ncan be selected. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "**As input component** : Passes the value of the selected radio button as a `str | int | float`, or its index as an `int` into the function, depending on `type`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: str | int | float | None\n )\n \t...\n\n \n\n**As output component** : Expects a `str | int | float` corresponding to the value of the radio button to be selected\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | int | float | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n choices: list[str | int | float | tuple[str, str | int | float]] | None\n\ndefault `= None`\n\nA list of string or numeric options to select from. An option can also be a\ntuple of the form (name, value), where name is the displayed name of the radio\nbutton and value is the value to be passed to the function, or returned by the\nfunction.\n\n\n \n \n value: str | int | float | Callable | None\n\ndefault `= None`\n\nThe option selected by default. If None, no option is selected by default. 
If\na function is provided, the function will be called each time the app loads to\nset the initial value of this component.\n\n\n \n \n type: Literal['value', 'index']\n\ndefault `= \"value\"`\n\nType of value to be returned by component. \"value\" returns the string of the\nchoice selected, \"index\" returns the index of the choice selected.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component, displayed above the component if `show_label` is\n`True` and is also used as the header if there is a table of examples for\nthis component. If None and used in a `gr.Interface`, the label will be the\nname of the parameter this component corresponds to.\n\n\n \n \n info: str | I18nData | None\n\ndefault `= None`\n\nadditional component description, appears below the label in smaller font.\nSupports markdown / HTML syntax.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinuously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "value` is a\nfunction (has no effect otherwise). 
`value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nIf True, will place the component in a container, providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nRelative width compared to adjacent Components in a Row. For example, if\nComponent A has scale=2, and Component B has scale=1, A will be twice as wide\nas B. Should be an integer.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nMinimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\nIf True, choices in this radio group will be selectable; if False, selection\nwill be disabled. If not provided, this is inferred based on whether the\ncomponent is used as an input or output.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM.\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] 
| None\n\ndefault `= None`\n\nin a gr.render, Compon", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided to the constructor.\n\n\n \n \n rtl: bool\n\ndefault `= False`\n\nIf True, the radio buttons will be displayed in right-to-left order. Default\nis False.\n\n\n \n \n buttons: list[Button] | None\n\ndefault `= None`\n\nA list of gr.Button() instances to show in the top right corner of the\ncomponent. 
Custom buttons will appear in the toolbar with their configured\nicon and/or label, and clicking them will trigger any .click() events\nregistered on the button.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.Radio`| \"radio\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "sentence_builder, blocks_essay\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Radio component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Radio.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or deselects\nthe Radio. Uses event data gradio.SelectData to carry `value` referring to the\nlabel of the Radio, and `selected` to refer to the state of the Radio. \n`Radio.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the Radio changes either\nbecause of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`Radio.input(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user changes the\nvalue of the Radio. 
\n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "puts. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= F", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "e has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another component's .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf set to \"once\" (default for all events except `.change()`), no new\nsubmissions are allowed while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "ple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provi", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": " an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "Creates a \"Sign In\" button that redirects the user to sign in with Hugging\nFace OAuth. Once the user is signed in, the button will act as a logout\nbutton, and you can retrieve a signed-in user's profile by adding a parameter\nof type `gr.OAuthProfile` to any Gradio function. This will only work if this\nGradio app is running in a Hugging Face Space. Permissions for the OAuth app\ncan be configured in the Spaces README file. For local development,\ninstead of OAuth, the local Hugging Face account that is logged in (via `hf\nauth login`) will be available through the `gr.OAuthProfile` object. 
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "**As input component** : (Rarely used) the `str` corresponding to the\nbutton label when the button is clicked\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: str | None\n )\n \t...\n\n \n\n**As output component** : string corresponding to the button label\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: str\n\ndefault `= \"Sign in with Hugging Face\"`\n\n\n \n \n logout_value: str\n\ndefault `= \"Logout ({})\"`\n\nThe text to display when the user is signed in. The string should contain a\nplaceholder for the username with a call-to-action to logout, e.g. \"Logout\n({})\".\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\n\n \n \n variant: Literal['primary', 'secondary', 'stop', 'huggingface']\n\ndefault `= \"huggingface\"`\n\n\n \n \n size: Literal['sm', 'md', 'lg']\n\ndefault `= \"lg\"`\n\n\n \n \n icon: str | Path | None\n\ndefault `= \"/home/runner/work/gradio/gradio/gradio/icons/huggingface-\nlogo.svg\"`\n\n\n \n \n link: str | None\n\ndefault `= None`\n\n\n \n \n link_target: Literal['_self', '_blank', '_parent', '_top']\n\ndefault `= \"_self\"`\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\n\n \n \n interactive: bool\n\ndefault `= True`\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\n\n \n \n render: bool\n\ndefault `= True`\n\n\n \n \n key: int | str | tuple[int | str, ...] 
| None\n\ndefault `= None`\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\n\n \n \n min_width: int | None\n\ndefault `= None`\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.LoginButton`| \"loginbutton\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "login_with_huggingface\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe LoginButton component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`LoginButton.click(fn, \u00b7\u00b7\u00b7)`| Triggered when the Button is clicked. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. 
Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs wi", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": " api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preproces", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "lt `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another component's .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf set to \"once\" (default for all events except `.change()`), no new\nsubmissions are allowed while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter i", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "one to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Component to select a date and (optionally) a time.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "**As input component** : Passes the selected datetime as a `float | datetime |\nstr` into the function, depending on `type`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: float | datetime | str | None\n )\n \t...\n\n \n\n**As output component** : Expects a `float`, `datetime`, or `str`\ncorresponding to the datetime to display.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> float | datetime | str | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: float | str | datetime | None\n\ndefault `= None`\n\ndefault value for datetime.\n\n\n \n \n include_time: bool\n\ndefault `= True`\n\nIf True, the component will include time selection. 
If False, only date\nselection will be available.\n\n\n \n \n type: Literal['timestamp', 'datetime', 'string']\n\ndefault `= \"timestamp\"`\n\nThe type of the value. Can be \"timestamp\", \"datetime\", or \"string\". If\n\"timestamp\", the value will be a number representing the start and end date in\nseconds since epoch. If \"datetime\", the value will be a datetime object. If\n\"string\", the value will be the date entered by the user.\n\n\n \n \n timezone: str | None\n\ndefault `= None`\n\nThe timezone to use for timestamps, such as \"US/Pacific\" or \"Europe/Paris\". If\nNone, the timezone will be the local timezone.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component, displayed above the component if `show_label` is\n`True` and is also used as the header if there are a table of examples for\nthis component. If None and used in a `gr.Interface`, the label will be the\nname of the parameter this component corresponds to.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n info: str | I18nData | None\n\ndefault `= None`\n\nadditional component description, appears below the label in smaller font.\nSupports markdown / HTML syntax.\n\n\n \n \n every: float | None\n\ndefault `= None`\n\nIf `value` is a callable, run the function 'every' number of seconds while the\nclient connection is open. Has no effect otherwise. The event can be accessed\n(e.g. to cancel it) via this component's .load_event attribute.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin B", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "nents. 
For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not render be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. 
Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n buttons: list[Button] | None\n\ndefault `= None`\n\nA list of ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "y have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n buttons: list[Button] | None\n\ndefault `= None`\n\nA list of gr.Button() instances to show in the top right corner of the\ncomponent. Custom buttons will appear in the toolbar with their configured\nicon and/or label, and clicking them will trigger any .click() events\nregistered on the button.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.DateTime`| \"datetime\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe DateTime component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`DateTime.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the DateTime changes\neither because of user input (e.g. 
a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`DateTime.submit(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user presses\nthe Enter key while the DateTime is focused. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a str", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "ion returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. 
If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": " meaning that it\nshould accept a list of input values for each parameter. 
The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. 
Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_l", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "rontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). 
If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "y if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/datetime", "source_page_title": "Gradio - Datetime Docs"}, {"text": "The gr.DownloadData class is a subclass of gr.EventData that specifically\ncarries information about the `.download()` event. When gr.DownloadData is\nadded as a type hint to an argument of an event listener method, a\ngr.DownloadData object will automatically be passed as the value of that\nargument. 
The attributes of this object contains information about the event\nthat triggered the listener.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/downloaddata", "source_page_title": "Gradio - Downloaddata Docs"}, {"text": "import gradio as gr\n def on_download(download_data: gr.DownloadData):\n return f\"Downloaded file: {download_data.file.path}\"\n with gr.Blocks() as demo:\n files = gr.File()\n textbox = gr.Textbox()\n files.download(on_download, None, textbox)\n demo.launch()\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/downloaddata", "source_page_title": "Gradio - Downloaddata Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n file: FileData\n\nThe file that was downloaded, as a FileData object.\n\n", "heading1": "Attributes", "source_page_url": "https://gradio.app/docs/gradio/downloaddata", "source_page_title": "Gradio - Downloaddata Docs"}, {"text": "Creates a chatbot that displays user-submitted messages and responses.\nSupports a subset of Markdown including bold, italics, code, tables. Also\nsupports audio/video/image files, which are displayed in the Chatbot, and\nother kinds of files which are displayed as links. This component is usually\nused as an output component. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "The Chatbot component accepts a list of messages, where each message is a\ndictionary with `role` and `content` keys. 
This format is compatible with the\nmessage format expected by most LLM APIs (OpenAI, Claude, HuggingChat, etc.),\nmaking it easy to pipe model outputs directly into the component.\n\nThe `role` key should be either `'user'` or `'assistant'`, and the `content`\nkey can be a string (rendered as markdown/HTML) or a Gradio component (useful\nfor displaying files, images, plots, and other media).\n\nAs an example:\n\n \n \n import gradio as gr\n \n history = [\n {\"role\": \"assistant\", \"content\": \"I am happy to provide you that report and plot.\"},\n {\"role\": \"assistant\", \"content\": gr.Plot(value=make_plot_from_file('quaterly_sales.txt'))}\n ]\n \n with gr.Blocks() as demo:\n gr.Chatbot(history)\n \n demo.launch()\n\nFor convenience, you can use the `ChatMessage` dataclass so that your text\neditor can give you autocomplete hints and typechecks.\n\n \n \n import gradio as gr\n \n history = [\n gr.ChatMessage(role=\"assistant\", content=\"How can I help you?\"),\n gr.ChatMessage(role=\"user\", content=\"Can you make me a plot of quarterly sales?\"),\n gr.ChatMessage(role=\"assistant\", content=\"I am happy to provide you that report and plot.\")\n ]\n \n with gr.Blocks() as demo:\n gr.Chatbot(history)\n \n demo.launch()\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: list[MessageDict | Message] | Callable | None\n\ndefault `= None`\n\nDefault list of messages to show in chatbot, where each message is of the\nformat {\"role\": \"user\", \"content\": \"Help me.\"}. Role can be one of \"user\",\n\"assistant\", or \"system\". Content should be either text, or media passed as a\nGradio component, e.g. {\"content\": gr.Image(\"lion.jpg\")}. 
If a function is\nprovided, the function will be called each time the app loads to set the\ninitial value of this component.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there are a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nIf True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "ide\nas B. Should be an integer. 
scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n autoscroll: bool\n\ndefault `= True`\n\nIf True, will automatically scroll to the bottom of the textbox when the value\nchanges, unless the user scrolls up. If False, will not scroll to the bottom\nof the textbox when the value changes.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not render be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. 
Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they hav", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "parameters from this component's constructor. Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n height: int | str | None\n\ndefault `= 400`\n\nThe height of the component, specified in pixels if a number is passed, or in\nCSS units if a string is passed. If messages exceed the height, the component\nwill scroll.\n\n\n \n \n resizable: bool\n\ndefault `= False`\n\nIf True, the user of the Gradio app can resize the chatbot by dragging the\nbottom right corner.\n\n\n \n \n max_height: int | str | None\n\ndefault `= None`\n\nThe maximum height of the component, specified in pixels if a number is\npassed, or in CSS units if a string is passed. If messages exceed the height,\nthe component will scroll. If messages are shorter than the height, the\ncomponent will shrink to fit the content. Will not have any effect if `height`\nis set and is smaller than `max_height`.\n\n\n \n \n min_height: int | str | None\n\ndefault `= None`\n\nThe minimum height of the component, specified in pixels if a number is\npassed, or in CSS units if a string is passed. If messages exceed the height,\nthe component will expand to fit the content. Will not have any effect if\n`height` is set and is larger than `min_height`.\n\n\n \n \n editable: Literal['user', 'all'] | None\n\ndefault `= None`\n\nAllows user to edit messages in the chatbot. If set to \"user\", allows editing\nof user messages. 
If set to \"all\", allows editing of assistant messages as\nwell.\n\n\n \n \n latex_delimiters: list[dict[str, str | bool]] | None\n\ndefault `= None`\n\nA list of dicts of the form {\"left\": open delimiter (str), \"right\": close\ndelimiter (str), \"display\": whether to display in newline (bool)} that will be\nused to render LaTeX expressions. If not provided, `latex_delimiters` is set\nto `[{ \"", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "pen delimiter (str), \"right\": close\ndelimiter (str), \"display\": whether to display in newline (bool)} that will be\nused to render LaTeX expressions. If not provided, `latex_delimiters` is set\nto `[{ \"left\": \"$$\", \"right\": \"$$\", \"display\": True }]`, so only expressions\nenclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass\nin an empty list to disable LaTeX rendering. For more information, see the\n[KaTeX documentation](https://katex.org/docs/autorender.html).\n\n\n \n \n rtl: bool\n\ndefault `= False`\n\nIf True, sets the direction of the rendered text to right-to-left. Default is\nFalse, which renders text left-to-right.\n\n\n \n \n buttons: list[Literal['share', 'copy', 'copy_all'] | Button] | None\n\ndefault `= None`\n\nA list of buttons to show in the top right corner of the component. Valid\noptions are \"share\", \"copy\", \"copy_all\", or a gr.Button() instance. The\n\"share\" button allows the user to share outputs to Hugging Face Spaces\nDiscussions. The \"copy\" button makes a copy button appear next to each\nindividual chatbot message. The \"copy_all\" button appears at the component\nlevel and allows the user to copy all chatbot messages. 
Custom gr.Button()\ninstances will appear in the toolbar with their configured icon and/or label,\nand clicking them will trigger any .click() events registered on the button.\nBy default, \"share\" and \"copy_all\" buttons are shown.\n\n\n \n \n watermark: str | None\n\ndefault `= None`\n\nIf provided, this text will be appended to the end of messages copied from the\nchatbot, after a blank line. Useful for indicating that the message is\ngenerated by an AI model.\n\n\n \n \n avatar_images: tuple[str | Path | None, str | Path | None] | None\n\ndefault `= None`\n\nTuple of two avatar image paths or URLs for user and bot (in that order). Pass\nNone for either the user or bot image to skip. Must be within the working\ndirectory of the Gradio app or an external URL.\n\n\n \n \n sanitize_html: bool\n\ndefault `= True`\n\nIf False, w", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "rder). Pass\nNone for either the user or bot image to skip. Must be within the working\ndirectory of the Gradio app or an external URL.\n\n\n \n \n sanitize_html: bool\n\ndefault `= True`\n\nIf False, will disable HTML sanitization for chatbot messages. This is not\nrecommended, as it can lead to security vulnerabilities.\n\n\n \n \n render_markdown: bool\n\ndefault `= True`\n\nIf False, will disable Markdown rendering for chatbot messages.\n\n\n \n \n feedback_options: list[str] | tuple[str, ...] | None\n\ndefault `= ('Like', 'Dislike')`\n\nA list of strings representing the feedback options that will be displayed to\nthe user. The exact case-sensitive strings \"Like\" and \"Dislike\" will render as\nthumb icons, but any other choices will appear under a separate flag icon.\n\n\n \n \n feedback_value: list[str | None] | None\n\ndefault `= None`\n\nA list of strings representing the feedback state for entire chat. Only works\nwhen type=\"messages\". 
Each entry in the list corresponds to that assistant\nmessage, in order, and the value is the feedback given (e.g. \"Like\",\n\"Dislike\", or any custom feedback option) or None if no feedback was given for\nthat message.\n\n\n \n \n line_breaks: bool\n\ndefault `= True`\n\nIf True (default), will enable Github-flavored Markdown line breaks in chatbot\nmessages. If False, single new lines will be ignored. Only applies if\n`render_markdown` is True.\n\n\n \n \n layout: Literal['panel', 'bubble'] | None\n\ndefault `= None`\n\nIf \"panel\", will display the chatbot in a llm style layout. If \"bubble\", will\ndisplay the chatbot with message bubbles, with the user and bot messages on\nalterating sides. Will default to \"bubble\".\n\n\n \n \n placeholder: str | None\n\ndefault `= None`\n\na placeholder message to display in the chatbot when it is empty. Centered\nvertically and horizontally in the Chatbot. Supports Markdown and HTML. If\nNone, no placeholder is displayed.\n\n\n \n \n examples: list[ExampleMessage] | None\n\ndefault `= None`\n\nA list of ex", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "ered\nvertically and horizontally in the Chatbot. Supports Markdown and HTML. If\nNone, no placeholder is displayed.\n\n\n \n \n examples: list[ExampleMessage] | None\n\ndefault `= None`\n\nA list of example messages to display in the chatbot before any user/assistant\nmessages are shown. Each example should be a dictionary with an optional\n\"text\" key representing the message that should be populated in the Chatbot\nwhen clicked, an optional \"files\" key, whose value should be a list of files\nto populate in the Chatbot, an optional \"icon\" key, whose value should be a\nfilepath or URL to an image to display in the example box, and an optional\n\"display_text\" key, whose value should be the text to display in the example\nbox. 
If \"display_text\" is not provided, the value of \"text\" will be displayed.\n\n\n \n \n allow_file_downloads: \n\ndefault `= True`\n\nIf True, will show a download button for chatbot messages that contain media.\nDefaults to True.\n\n\n \n \n group_consecutive_messages: bool\n\ndefault `= True`\n\nIf True, will display consecutive messages from the same role in the same\nbubble. If False, will display each message in a separate bubble. Defaults to\nTrue.\n\n\n \n \n allow_tags: list[str] | bool\n\ndefault `= True`\n\nIf a list of tags is provided, these tags will be preserved in the output\nchatbot messages, even if `sanitize_html` is `True`. For example, if this list\nis [\"thinking\"], the tags `` and `` will not be removed.\nIf True, all custom tags (non-standard HTML tags) will be preserved. If False,\nno tags will be preserved. Default value is 'True'.\n\n\n \n \n reasoning_tags: list[tuple[str, str]] | None\n\ndefault `= None`\n\nIf provided, a list of tuples of (open_tag, close_tag) strings. Any text\nbetween these tags will be extracted and displayed in a separate collapsible\nmessage with metadata={\"title\": \"Reasoning\"}. For example, [(\"\",\n\"\")] will extract content between and \",\n\"\")] will extract content between and tags.\nEach thinking block will be displayed as a separate collapsible message before\nthe main response. If None (default), no automatic extraction is performed.\n\n\n \n \n like_user_message: bool\n\ndefault `= False`\n\nIf True, will show like/dislike buttons for user messages as well. 
Defaults to\nFalse.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.Chatbot`| \"chatbot\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "**Displaying Thoughts/Tool Usage**\n\nYou can provide additional metadata regarding any tools used to generate the\nresponse. This is useful for displaying the thought process of LLM agents. For\nexample,\n\n \n \n def generate_response(history):\n history.append(\n ChatMessage(role=\"assistant\",\n content=\"The weather API says it is 20 degrees Celcius in New York.\",\n metadata={\"title\": \"\ud83d\udee0\ufe0f Used tool Weather API\"})\n )\n return history\n\nWould be displayed as following:\n\n![Gradio chatbot tool display](https://github.com/user-\nattachments/assets/c1514bc9-bc29-4af1-8c3f-cd4a7c2b217f)\n\nYou can also specify metadata with a plain python dictionary,\n\n \n \n def generate_response(history):\n history.append(\n dict(role=\"assistant\",\n content=\"The weather API says it is 20 degrees Celcius in New York.\",\n metadata={\"title\": \"\ud83d\udee0\ufe0f Used tool Weather API\"})\n )\n return history\n\n**Using Gradio Components Inside`gr.Chatbot`**\n\nThe `Chatbot` component supports using many of the core Gradio components\n(such as `gr.Image`, `gr.Plot`, `gr.Audio`, and `gr.HTML`) inside of the\nchatbot. 
Simply include one of these components as the `content` of a message.\nHere\u2019s an example:\n\n \n \n import gradio as gr\n \n def load():\n return [\n {\"role\": \"user\", \"content\": \"Can you show me some media?\"},\n {\"role\": \"assistant\", \"content\": \"Here's an audio clip:\"},\n {\"role\": \"assistant\", \"content\": gr.Audio(\"https://github.com/gradio-app/gradio/raw/main/gradio/media_assets/audio/audio_sample.wav\")},\n {\"role\": \"assistant\", \"content\": \"And here's a video:\"},\n {\"role\": \"assistant\", \"content\": gr.Video(\"https://github.com/gradio-app/gradio/raw/main/gradio/media_assets/videos/world.mp4\")}\n ]\n \n with gr.Blocks() as demo:\n chatbot = gr.Chatbot()\n button = gr.Button(\"Load ", "heading1": "Examples", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "deo(\"https://github.com/gradio-app/gradio/raw/main/gradio/media_assets/videos/world.mp4\")}\n ]\n \n with gr.Blocks() as demo:\n chatbot = gr.Chatbot()\n button = gr.Button(\"Load audio and video\")\n button.click(load, None, chatbot)\n \n demo.launch()\n\n", "heading1": "Examples", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "chatbot_simplechatbot_streamingchatbot_with_toolschatbot_core_components\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Chatbot component supports the following event listeners. 
Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Chatbot.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the Chatbot changes\neither because of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`Chatbot.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or\ndeselects the Chatbot. Uses event data gradio.SelectData to carry `value`\nreferring to the label of the Chatbot, and `selected` to refer to state of the\nChatbot. See for more\ndetails. \n`Chatbot.like(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nlikes/dislikes from within the Chatbot. This event has EventData of type\ngradio.LikeData that carries information, accessible through LikeData.index\nand LikeData.value. See EventData documentation on how to use this event data. \n`Chatbot.retry(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clicks the\nretry button in the chatbot message. \n`Chatbot.undo(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clicks the\nundo button in the chatbot message. \n`Chatbot.example_select(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nclicks on an example from within the Chatbot. This event has SelectData of\ntype gradio.SelectData that carries information, accessible through\nSelectData.index and SelectData.value. See SelectData documentation on how to\nuse this event data. \n`Chatbot.option_select(fn, \u00b7\u00b7\u00b7)`| This listener i", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Data that carries information, accessible through\nSelectData.index and SelectData.value. 
See SelectData documentation on how to\nuse this event data. \n`Chatbot.option_select(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nclicks on an option from within the Chatbot. This event has SelectData of type\ngradio.SelectData that carries information, accessible through\nSelectData.index and SelectData.value. See SelectData documentation on how to\nuse this event data. \n`Chatbot.clear(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clears the\nChatbot using the clear button for the component. \n`Chatbot.copy(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user copies\ncontent from the Chatbot. Uses event data gradio.CopyData to carry information\nabout the copied content. See EventData documentation on how to use this event\ndata \n`Chatbot.edit(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user edits the\nChatbot (e.g. image) using the built-in editor. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. 
If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. 
If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The func", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. 
Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. 
Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate(", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "e executed first with queue=False, and only if it\ncompletes successfully will the main function be called. 
The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "", "heading1": "Helper Classes", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "gradio.ChatMessage(\u00b7\u00b7\u00b7)\n\nDescription\n\nA dataclass that represents a message in the Chatbot component (with\ntype=\"messages\"). The only required field is `content`. The value of\n`gr.Chatbot` is a list of these dataclasses.\n\nParameters \u25bc\n\n\n \n \n content: MessageContent | list[MessageContent]\n\nThe content of the message. Can be a string, a file dict, a gradio component,\nor a list of these types to group these messages together.\n\n\n \n \n role: Literal['user', 'assistant', 'system']\n\ndefault `= \"assistant\"`\n\nThe role of the message, which determines the alignment of the message in the\nchatbot. Can be \"user\", \"assistant\", or \"system\". Defaults to \"assistant\".\n\n\n \n \n metadata: MetadataDict\n\ndefault `= _HAS_DEFAULT_FACTORY_CLASS()`\n\nThe metadata of the message, which is used to display intermediate thoughts /\ntool usage. Should be a dictionary with the following keys: \"title\" (required\nto display the thought), and optionally: \"id\" and \"parent_id\" (to nest\nthoughts), \"duration\" (to display the duration of the thought), \"status\" (to\ndisplay the status of the thought).\n\n\n \n \n options: list[OptionDict]\n\ndefault `= _HAS_DEFAULT_FACTORY_CLASS()`\n\nThe options of the message. 
A list of Option objects, which are dictionaries\nwith the following keys: \"label\" (the text to display in the option), and\noptionally \"value\" (the value to return when the option is selected if\ndifferent from the label).\n\n", "heading1": "ChatMessage", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "A typed dictionary to represent metadata for a message in the Chatbot\ncomponent. An instance of this dictionary is used for the `metadata` field in\na ChatMessage when the chat message should be displayed as a thought.\n\nKeys \u25bc\n\n\n \n \n title: str\n\nThe title of the 'thought' message. Only required field.\n\n\n \n \n id: int | str\n\nThe ID of the message. Only used for nested thoughts. Nested thoughts can be\nnested by setting the parent_id to the id of the parent thought.\n\n\n \n \n parent_id: int | str\n\nThe ID of the parent message. Only used for nested thoughts.\n\n\n \n \n log: str\n\nA string message to display next to the thought title in a subdued font.\n\n\n \n \n duration: float\n\nThe duration of the message in seconds. Appears next to the thought title in a\nsubdued font inside a parentheses.\n\n\n \n \n status: Literal['pending', 'done']\n\nif set to `'pending'`, a spinner appears next to the thought title and the\naccordion is initialized open. If `status` is `'done'`, the thought accordion\nis initialized closed. If `status` is not provided, the thought accordion is\ninitialized open and no spinner is displayed.\n\n", "heading1": "MetadataDict", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "A typed dictionary to represent an option in a ChatMessage. 
A list of these\ndictionaries is used for the `options` field in a ChatMessage.\n\nKeys \u25bc\n\n\n \n \n value: str\n\nThe value to return when the option is selected.\n\n\n \n \n label: str\n\nThe text to display in the option, if different from the value.\n\n", "heading1": "OptionDict", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "This component displays a table of value spreadsheet-like component. Can be\nused to display data as an output component, or as an input to collect data\nfrom the user.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "**As input component** : Passes the uploaded spreadsheet data as a\n`pandas.DataFrame`, `numpy.array`, `polars.DataFrame`, or native 2D Python\n`list[list]` depending on `type`\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: pd.DataFrame | np.ndarray | pl.DataFrame | list[list]\n )\n \t...\n\n \n\n**As output component** : Expects data in any of these formats:\n`pandas.DataFrame`, `pandas.Styler`, `numpy.array`, `polars.DataFrame`,\n`list[list]`, `list`, or a `dict` with keys 'data' (and optionally 'headers'),\nor `str` path to a csv, which is rendered as the spreadsheet.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> pd.DataFrame | Styler | np.ndarray | pl.DataFrame | list | list[list] | dict | str | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: pd.DataFrame | Styler | np.ndarray | pl.DataFrame | list | list[list] | dict | str | Callable | None\n\ndefault `= None`\n\nDefault value to display in the DataFrame. Supports pandas, numpy, polars, and\nlist of lists. 
If a Styler is provided, it will be used to set the displayed\nvalue in the DataFrame (e.g. to set precision of numbers) if the `interactive`\nis False. If a Callable function is provided, the function will be called\nwhenever the app loads to set the initial value of the component.\n\n\n \n \n headers: list[str] | None\n\ndefault `= None`\n\nList of str header names. These are used to set the column headers of the\ndataframe if the value does not have headers. If None, no headers are shown.\n\n\n \n \n row_count: int | None\n\ndefault `= None`\n\nThe number of rows to initially display in the dataframe. If None, the number\nof rows is determined automatically based on the `value`.\n\n\n \n \n row_limits: tuple[int | None, int | None] | None\n\ndefault `= None`\n\nA tuple of two integers specifying the minimum and maximum number of rows that\ncan be created in the dataframe via the UI. If the first element is None,\nthere is no minimum number of rows. If the second element is None, there is no\nmaximum number of rows. Only applies if `interactive` is True.\n\n\n \n \n col_count: None\n\ndefault `= None`\n\nThis parameter is deprecated. Please use `column_count` instead.\n\n\n \n \n column_count: int | None\n\ndefault `= None`\n\nThe number of columns to initially display in the dataframe. If None, the\nnumber of columns is determined automatically based on the `value`.\n\n\n \n \n column_limits: tuple[int | None, int | None] | None\n\ndefault `= None`\n\nA tuple of two integers specifying the minimum and maximum number of columns\nthat can be created in the dataframe via the UI. If the first element is None,\nthere is no minimum number of columns. If the second element is None, there is\nno maximum number of columns. Only applies if", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "t can be created in the dataframe via the UI. 
If the first element is None,\nthere is no minimum number of columns. If the second element is None, there is\nno maximum number of columns. Only applies if `interactive` is True.\n\n\n \n \n datatype: Literal['str', 'number', 'bool', 'date', 'markdown', 'html', 'image', 'auto'] | list[Literal['str', 'number', 'bool', 'date', 'markdown', 'html']]\n\ndefault `= \"str\"`\n\nDatatype of values in sheet. Can be provided per column as a list of strings,\nor for the entire sheet as a single string. Valid datatypes are \"str\",\n\"number\", \"bool\", \"date\", and \"markdown\". Boolean columns will display as\ncheckboxes. If the datatype \"auto\" is used, the column datatypes are\nautomatically selected based on the value input if possible.\n\n\n \n \n type: Literal['pandas', 'numpy', 'array', 'polars']\n\ndefault `= \"pandas\"`\n\nType of value to be returned by component. \"pandas\" for pandas dataframe,\n\"numpy\" for numpy array, \"polars\" for polars dataframe, or \"array\" for a\nPython list of lists.\n\n\n \n \n latex_delimiters: list[dict[str, str | bool]] | None\n\ndefault `= None`\n\nA list of dicts of the form {\"left\": open delimiter (str), \"right\": close\ndelimiter (str), \"display\": whether to display in newline (bool)} that will be\nused to render LaTeX expressions. If not provided, `latex_delimiters` is set\nto `[{ \"left\": \"$$\", \"right\": \"$$\", \"display\": True }]`, so only expressions\nenclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass\nin an empty list to disable LaTeX rendering. For more information, see the\n[KaTeX documentation](https://katex.org/docs/autorender.html). Only applies to\ncolumns whose datatype is \"markdown\".\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there are a table of examples for this component. 
If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "e the component and is also used as\nthe header if there are a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n max_height: int | str\n\ndefault `= 500`\n\nThe maximum height of the dataframe, specified in pixels if a number is\npassed, or in CSS units if a string is passed. If more rows are created than\ncan fit in the height, a scrollbar will appear.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. 
If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\nif True, will allow users to edit the dataframe; if False, can only be used to\ndisplay data. If not provided, this is inferred based on whether the component\nis used as an input or output.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, co", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": " used to\ndisplay data. If not provided, this is inferred based on whether the component\nis used as an input or output.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not render be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. 
Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n wrap: bool\n\ndefault `= False`\n\nIf True, the text in table cells will wrap when appropriate. If False and the\n`column_width` parameter is not set, the column widths will expand based on\nthe cell contents and the table may need to be horizontally scrolled. If\n`column_width` is set, then any overflow text will be hidden.\n\n\n \n \n line_breaks: bool\n\ndefault `= True`\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "nd based on\nthe cell contents and the table may need to be horizontally scrolled. If\n`column_width` is set, then any overflow text will be hidden.\n\n\n \n \n line_breaks: bool\n\ndefault `= True`\n\nIf True (default), will enable Github-flavored Markdown line breaks in chatbot\nmessages. If False, single new lines will be ignored. Only applies for columns\nof type \"markdown.\"\n\n\n \n \n column_widths: list[str | int] | None\n\ndefault `= None`\n\nAn optional list representing the width of each column. The elements of the\nlist should be in the format \"100px\" (ints are also accepted and converted to\npixel values) or \"10%\". The percentage width is calculated based on the\nviewport width of the table. If not provided, the column widths will be\nautomatically determined based on the content of the cells.\n\n\n \n \n buttons: list[Literal['fullscreen', 'copy']] | None\n\ndefault `= None`\n\nA list of buttons to show in the top right corner of the component. Valid\noptions are \"fullscreen\" and \"copy\". The \"fullscreen\" button allows the user\nto view the table in fullscreen mode. 
The \"copy\" button allows the user to\ncopy the table data to the clipboard. By default, all buttons are shown.\n\n\n \n \n show_row_numbers: bool\n\ndefault `= False`\n\nIf True, will display row numbers in a separate column.\n\n\n \n \n max_chars: int | None\n\ndefault `= None`\n\nMaximum number of characters to display in each cell before truncating\n(single-clicking a cell value will still reveal the full content). If None, no\ntruncation is applied.\n\n\n \n \n show_search: Literal['none', 'search', 'filter']\n\ndefault `= \"none\"`\n\nShow a search input in the toolbar. If \"search\", a search input is shown. If\n\"filter\", a search input and filter buttons are shown. If \"none\", no search\ninput is shown.\n\n\n \n \n pinned_columns: int | None\n\ndefault `= None`\n\nIf provided, will pin the specified number of columns from the left.\n\n\n \n \n static_columns: list[int] | None\n\ndefault `= None`\n\nList o", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": " \n \n pinned_columns: int | None\n\ndefault `= None`\n\nIf provided, will pin the specified number of columns from the left.\n\n\n \n \n static_columns: list[int] | None\n\ndefault `= None`\n\nList of column indices (int) that should not be editable. Only applies when\ninteractive=True. 
When specified, col_count is automatically set to \"fixed\"\nand columns cannot be inserted or deleted.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.Dataframe`| \"dataframe\"| Uses default values \n`gradio.Numpy`| \"numpy\"| Uses type=\"numpy\" \n`gradio.Matrix`| \"matrix\"| Uses type=\"array\" \n`gradio.List`| \"list\"| Uses type=\"array\", col_count=1 \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "filter_recordsmatrix_transposetax_calculatorsort_records\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Dataframe component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Dataframe.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the Dataframe changes\neither because of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`Dataframe.input(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user changes\nthe value of the Dataframe. \n`Dataframe.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or\ndeselects the Dataframe. 
Uses event data gradio.SelectData to carry `value`\nreferring to the label of the Dataframe, and `selected` to refer to state of\nthe Dataframe. See for\nmore details. \n`Dataframe.edit(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user edits the\nDataframe (e.g. image) using the built-in editor. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inp", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": " inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. 
If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": " show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. 
The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are al", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "ays_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). 
If set, this\nvalue identifies an event as identical across re-r", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": " \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dataframe", "source_page_title": "Gradio - Dataframe Docs"}, {"text": "Creates a file explorer component that allows users to browse files on the\nmachine hosting the Gradio app. 
As an input component, it also allows users to\nselect files to be used as input to a function, while as an output component,\nit displays selected files.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "**As input component** : Passes the selected file or directory as a `str`\npath (relative to `root`) or `list[str]` depending on `file_count`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: list[str] | str | None\n )\n \t...\n\n \n\n**As output component** : Expects function to return a `str` path to a\nfile, or `list[str]` consisting of paths to files.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | list[str] | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n glob: str\n\ndefault `= \"**/*\"`\n\nThe glob-style pattern used to select which files to display, e.g. \"*\" to\nmatch all files, \"*.png\" to match all .png files, \"**/*.txt\" to match any .txt\nfile in any subdirectory, etc. The default value matches all files and folders\nrecursively. See the Python glob documentation at\nhttps://docs.python.org/3/library/glob.html for more information.\n\n\n \n \n value: str | list[str] | Callable | None\n\ndefault `= None`\n\nThe file (or list of files, depending on the `file_count` parameter) to show\nas \"selected\" when the component is first loaded. If a callable is provided,\nit will be called when the app loads to set the initial value of the\ncomponent. If not provided, no files are shown as selected.\n\n\n \n \n file_count: Literal['single', 'multiple']\n\ndefault `= \"multiple\"`\n\nWhether to allow single or multiple files to be selected. 
If \"single\", the\ncomponent will return a single absolute file path as a string. If \"multiple\",\nthe component will return a list of absolute file paths as a list of strings.\n\n\n \n \n root_dir: str | Path\n\ndefault `= \".\"`\n\nPath to root directory to select files from. If not provided, defaults to\ncurrent working directory. Raises ValueError if the directory does not exist.\n\n\n \n \n ignore_glob: str | None\n\ndefault `= None`\n\nThe glob-style, case-sensitive pattern that will be used to exclude files from\nthe list. For example, \"*.py\" will exclude all .py files from the list. See\nthe Python glob documentation at https://docs.python.org/3/library/glob.html\nfor more information.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there is a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinuously calls", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "onent. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinuously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). 
`value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nIf True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n height: int | str | None\n\ndefault `= None`\n\nThe maximum height of the file component, specified in pixels if a number is\npassed, or in CSS units if a string is passed. If more files are uploaded than\ncan fit in the height, a scrollbar will appear.\n\n\n \n \n max_height: int | str | None\n\ndefault `= 500`\n\n\n \n \n min_height: int | str | None\n\ndefault `= None`\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\nif True, will allow users to select file(s); if False, will only display\nfile", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "\n\n \n \n min_height: int | str | None\n\ndefault `= None`\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\nif True, will allow users to select file(s); if False, will only display\nfiles. If not provided, this is inferred based on whether the component is\nused as an input or output.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. 
If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n buttons: list[Button] | None\n\ndefault `= None`\n\nA list of gr.Button() instances to show in the top right corner of the\ncomponent. Custom buttons will appear in the toolbar with their configured", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": " \n buttons: list[Button] | None\n\ndefault `= None`\n\nA list of gr.Button() instances to show in the top right corner of the\ncomponent. 
Custom buttons will appear in the toolbar with their configured\nicon and/or label, and clicking them will trigger any .click() events\nregistered on the button.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.FileExplorer`| \"fileexplorer\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe FileExplorer component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`FileExplorer.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the FileExplorer\nchanges either because of user input (e.g. a user types in a textbox) OR\nbecause of a function update (e.g. an image receives a value from the output\nof an event trigger). See `.input()` for a listener that is only triggered by\nuser input. \n`FileExplorer.input(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nchanges the value of the FileExplorer. \n`FileExplorer.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or\ndeselects the FileExplorer. Uses event data gradio.SelectData to carry `value`\nreferring to the label of the FileExplorer, and `selected` to refer to the state\nof the FileExplorer. 
\n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component |", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": " None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setti", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "rue`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a s", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "low any\nsubmissions while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). 
If set, this\nvalue identifies an event as identical across re-r", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "tener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/fileexplorer", "source_page_title": "Gradio - Fileexplorer Docs"}, {"text": "Used to create an upload button that, when clicked, allows a user to upload files\nthat satisfy the specified file type, or generic files (if file_type is not set). 
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "**As input component** : Passes the file as a `str` or `bytes` object, or a\nlist of `str` or list of `bytes` objects, depending on `type` and\n`file_count`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: bytes | str | list[bytes] | list[str] | None\n )\n \t...\n\n \n\n**As output component** : Expects a `str` filepath or URL, or a `list[str]`\nof filepaths/URLs.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | list[str] | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n label: str\n\ndefault `= \"Upload a File\"`\n\nText to display on the button. Defaults to \"Upload a File\".\n\n\n \n \n value: str | I18nData | list[str] | Callable | None\n\ndefault `= None`\n\nFile or list of files to upload by default.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinuously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n variant: Literal['primary', 'secondary', 'stop']\n\ndefault `= \"secondary\"`\n\n'primary' for main call-to-action, 'secondary' for a more subdued style,\n'stop' for a stop button.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. 
If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n size: Literal['sm', 'md', 'lg']\n\ndefault `= \"lg\"`\n\nsize of the button. Can be \"sm\", \"md\", or \"lg\".\n\n\n \n \n icon: str | None\n\ndefault `= None`\n\nURL or path to the icon file to display within the button. If None, no icon\nwill be displayed.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int | None\n\ndefault `= None`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_widt", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "idth: int | None\n\ndefault `= None`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n interactive: bool\n\ndefault `= True`\n\nIf False, the UploadButton will be in a disabled state.\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. 
Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n type: Literal['filepath', 'binary']\n\ndefault `= \"filepath\"`\n\nType of value to be returned by component. \"filepath\" returns a temporary file\nobject with the same base name as the uploaded file, whose full path can be\nretrieved by file_obj.name, \"binary\" returns a bytes object.\n\n\n \n \n file_count: Literal['single', 'multiple', 'directory']\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "h the same base name as the uploaded file, whose full path can be\nretrieved by file_obj.name, \"binary\" returns a bytes object.\n\n\n \n \n file_count: Literal['single', 'multiple', 'directory']\n\ndefault `= \"single\"`\n\nIf \"single\", allows user to upload one file. If \"multiple\", user uploads\nmultiple files. If \"directory\", user uploads all files in selected directory.\nReturn type will be list for each file in case of \"multiple\" or \"directory\".\n\n\n \n \n file_types: list[str] | None\n\ndefault `= None`\n\nList of types of files to be uploaded. 
\"file\" allows any file to be uploaded,\n\"image\" allows only image files to be uploaded, \"audio\" allows only audio\nfiles to be uploaded, \"video\" allows only video files to be uploaded, \"text\"\nallows only text files to be uploaded.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.UploadButton`| \"uploadbutton\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "upload_and_download, upload_button\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe UploadButton component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`UploadButton.click(fn, \u00b7\u00b7\u00b7)`| Triggered when the UploadButton is clicked. \n`UploadButton.upload(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nuploads a file into the UploadButton. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. 
Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
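The `batch=True` contract above (lists in, tuple of lists out) can be sketched with a plain function; the name `batched_upper` and the single input/output pairing are illustrative assumptions:

```python
# Illustrative batch=True handler: receives one list per input component
# (up to max_batch_size items) and must return a tuple of lists, one list
# per output component -- even when there is only one output.
def batched_upper(texts):
    return ([t.upper() for t in texts],)  # note the trailing comma: tuple of lists
```

Such a function would be registered with hypothetical wiring like `btn.click(batched_upper, inputs=box, outputs=box, batch=True, max_batch_size=4)`.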
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "component), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simul", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "fault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/uploadbutton", "source_page_title": "Gradio - Uploadbutton Docs"}, {"text": "The Progress class provides a custom progress tracker that is used in a\nfunction signature. To attach a Progress tracker to a function, simply add a\nparameter right after the input parameters that has a default value set to a\n`gradio.Progress()` instance. 
The Progress tracker can then be updated in the\nfunction by calling the Progress object or using the `tqdm` method on an\nIterable.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/progress", "source_page_title": "Gradio - Progress Docs"}, {"text": "import gradio as gr\nimport time\n\ndef my_function(x, progress=gr.Progress()):\n    progress(0, desc=\"Starting...\")\n    time.sleep(1)\n    for i in progress.tqdm(range(100)):\n        time.sleep(0.1)\n    return x\n\ngr.Interface(my_function, gr.Textbox(), gr.Textbox()).queue().launch()\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/progress", "source_page_title": "Gradio - Progress Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n track_tqdm: bool\n\ndefault `= False`\n\nIf True, the Progress object will track any tqdm.tqdm iterations with the tqdm\nlibrary in the function.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/progress", "source_page_title": "Gradio - Progress Docs"}, {"text": "", "heading1": "Methods", "source_page_url": "https://gradio.app/docs/gradio/progress", "source_page_title": "Gradio - Progress Docs"}, {"text": 
"gradio.Progress.__call__(progress, \u00b7\u00b7\u00b7)\n\nDescription\n\nUpdates progress tracker with progress and message text.\n\nParameters \u25bc\n\n\n \n \n progress: float | tuple[int, int | None] | None\n\nIf float, should be between 0 and 1 representing completion. If Tuple, first\nnumber represents steps completed, and second value represents total steps or\nNone if unknown. If None, hides progress bar.\n\n\n \n \n desc: str | None\n\ndefault `= None`\n\ndescription to display.\n\n\n \n \n total: int | float | None\n\ndefault `= None`\n\nestimated total number of steps.\n\n\n \n \n unit: str\n\ndefault `= \"steps\"`\n\nunit of iterations.\n\n", "heading1": "__call__", "source_page_url": "https://gradio.app/docs/gradio/progress", "source_page_title": "Gradio - Progress Docs"}, {"text": "gradio.Progress.tqdm(iterable, \u00b7\u00b7\u00b7)\n\nDescription\n\nAttaches progress tracker to iterable, like tqdm.\n\nParameters \u25bc\n\n\n \n \n iterable: Iterable | None\n\niterable to attach progress tracker to.\n\n\n \n \n desc: str | None\n\ndefault `= None`\n\ndescription to display.\n\n\n \n \n total: int | float | None\n\ndefault `= None`\n\nestimated total number of steps.\n\n\n \n \n unit: str\n\ndefault `= \"steps\"`\n\nunit of iterations.\n\n", "heading1": "tqdm", "source_page_url": "https://gradio.app/docs/gradio/progress", "source_page_title": "Gradio - Progress Docs"}, {"text": "Creates a video component that can be used to upload/record videos (as an\ninput) or display videos (as an output). For the video to be playable in the\nbrowser it must have a compatible container and codec combination. Allowed\ncombinations are .mp4 with h264 codec, .ogg with theora codec, and .webm with\nvp9 codec. If the component detects that the output video would not be\nplayable in the browser it will attempt to convert it to a playable mp4 video.\nIf the conversion fails, the original video is returned. 
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/video", "source_page_title": "Gradio - Video Docs"}, {"text": "**As input component** : Passes the uploaded video as a `str` filepath or\nURL whose extension can be modified by `format`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: str | None\n )\n \t...\n\n \n\n**As output component** : Expects a `str` or `pathlib.Path` filepath to a video which is displayed, or a `Tuple[str | pathlib.Path, str | pathlib.Path | None]` where the first element is a filepath to a video and the second element is an optional filepath to a subtitle file.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | Path | tuple[str | Path, str | Path | None] | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/video", "source_page_title": "Gradio - Video Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: str | Path | Callable | None\n\ndefault `= None`\n\npath or URL for the default value that Video component is going to take. Or\ncan be callable, in which case the function will be called whenever the app\nloads to set the initial value of the component.\n\n\n \n \n format: str | None\n\ndefault `= None`\n\nthe file extension with which to save video, such as 'avi' or 'mp4'. This\nparameter applies both when this component is used as an input to determine\nwhich file format to convert user-provided video to, and when this component\nis used as an output to determine the format of video returned to the user. If\nNone, no file format conversion is done and the video is kept as is. Use 'mp4'\nto ensure browser playability.\n\n\n \n \n sources: list[Literal['upload', 'webcam']] | Literal['upload', 'webcam'] | None\n\ndefault `= None`\n\nlist of sources permitted for video. 
\"upload\" creates a box where user can\ndrop a video file, \"webcam\" allows user to record a video from their webcam.\nIf None, defaults to both [\"upload\", \"webcam\"].\n\n\n \n \n height: int | str | None\n\ndefault `= None`\n\nThe height of the component, specified in pixels if a number is passed, or in\nCSS units if a string is passed. This has no effect on the preprocessed video\nfile, but will affect the displayed video.\n\n\n \n \n width: int | str | None\n\ndefault `= None`\n\nThe width of the component, specified in pixels if a number is passed, or in\nCSS units if a string is passed. This has no effect on the preprocessed video\nfile, but will affect the displayed video.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there is a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\ncontinuously calls `value` to reca", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/video", "source_page_title": "Gradio - Video Docs"}, {"text": "nd\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\ncontinuously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\ncomponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). 
`value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nif True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\nif True, will allow users to upload a video; if False, can only be used to\ndisplay videos. If not provided, this is inferred based on whether the\ncomponent is used as an input or output.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nan optional string that i", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/video", "source_page_title": "Gradio - Video Docs"}, {"text": "e hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nan optional string that is assigned as the id of this component in the HTML\nDOM. 
Can be used for targeting CSS styles.


elem_classes: list[str] | str | None

default `= None`

an optional list of strings that are assigned as the classes of this component
in the HTML DOM. Can be used for targeting CSS styles.


render: bool

default `= True`

if False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.


key: int | str | tuple[int | str, ...] | None

default `= None`

in a gr.render, Components with the same key across re-renders are treated as
the same component, not a new component. Properties set in 'preserved_by_key'
are not reset across a re-render.


preserved_by_key: list[str] | str | None

default `= "value"`

A list of parameters from this component's constructor. Inside a gr.render()
function, if a component is re-rendered with the same key, these (and only
these) parameters will be preserved in the UI (if they have been changed by
the user or an event listener) instead of re-rendered based on the values
provided in the constructor.


webcam_options: WebcamOptions | None

default `= None`

A `gr.WebcamOptions` instance that allows developers to specify custom media
constraints for the webcam stream. This parameter provides flexibility to
control the video stream's properties, such as resolution and front or rear
camera on mobile devices. See $demo/webcam_constraints


include_audio: bool | None

default `= None`

whether the component should record/retain the audio track for a video. By
default, audio is excluded for webcam videos and included for uploaded videos.
autoplay: bool

default `= False`

whether to automatically play the video when the component is used as an
output. Note: browsers will not autoplay video files if the user has not
interacted with the page yet.


buttons: list[Literal['download', 'share'] | Button] | None

default `= None`

A list of buttons to show in the top right corner of the component. Valid
options are "download", "share", or a gr.Button() instance. The "download"
button allows the user to save the video to their device. The "share" button
allows the user to share the video via Hugging Face Spaces Discussions. Custom
gr.Button() instances will appear in the toolbar with their configured icon
and/or label, and clicking them will trigger any .click() events registered on
the button. By default, no buttons are shown if the component is interactive
and both buttons are shown if the component is not interactive.


loop: bool

default `= False`

if True, the video will loop when it reaches the end and continue playing from
the beginning.


streaming: bool

default `= False`

when set as an output, takes video chunks yielded from the backend and
combines them into one streaming video output. Each chunk should be a video
file with a .ts extension using an h.264 encoding. MP4 files are also accepted
but they will be converted to h.264 encoding.


watermark: WatermarkOptions | None

default `= None`

A `gr.WatermarkOptions` instance that includes an image file and position to
be used as a watermark on the video. The image is not scaled and is displayed
at the provided position on the video. 
Valid formats for the image are: jpeg,
png.


subtitles: str | Path | list[dict[str, Any]] | None

default `= None`

A subtitle file (srt, vtt, or json) for the video, or a list of subtitle
dictionaries in the format [{"text": str, "timestamp": [start, end]}] where
timestamps are in seconds. JSON files should contain an array of subtitle
objects.


playback_position: float

default `= 0`

The starting playback position in seconds. This value is also updated as the
video plays, reflecting the current playback position.

Shortcuts

Class| Interface String Shortcut| Initialization 
---|---|--- 
`gradio.Video`| "video"| Uses default values 
`gradio.PlayableVideo`| "playablevideo"| Uses format="mp4" 

Demos

video_identity_2

Event Listeners

Description

Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.

Supported Event Listeners

The Video component supports the following event listeners. 
Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Video.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the Video changes either\nbecause of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`Video.clear(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clears the\nVideo using the clear button for the component. \n`Video.start_recording(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nstarts recording with the Video. \n`Video.stop_recording(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nstops recording with the Video. \n`Video.stop(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user reaches the\nend of the media playing in the Video. \n`Video.play(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user plays the\nmedia in the Video. \n`Video.pause(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the media in the Video\nstops for any reason. \n`Video.end(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user reaches the end\nof the media playing in the Video. \n`Video.upload(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user uploads a\nfile into the Video. \n`Video.input(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user changes the\nvalue of the Video. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. 
Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.


inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.


outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.


api_name: str | None

default `= None`

defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.


api_description: str | None | Literal[False]

default `= None`

Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. 
If False, then no description will be displayed in the
API docs.


scroll_to_output: bool

default `= False`

If True, will scroll to output component on completion


show_progress: Literal['full', 'minimal', 'hidden']

default `= "full"`

how to show the progress animation while event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, "hidden"
shows no progress animation at all


show_progress_on: Component | list[Component] | None

default `= None`

Component or list of components to show the progress animation on. If None,
will show the progress animation on all of the output components.


queue: bool

default `= True`

If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.


batch: bool

default `= False`

If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). 
The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.


max_batch_size: int

default `= 4`

Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)


preprocess: bool

default `= True`

If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).


postprocess: bool

default `= True`

If False, will not run postprocessing of component data before returning 'fn'
output to the browser.


cancels: dict[str, Any] | list[dict[str, Any]] | None

default `= None`

A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.


trigger_mode: Literal['once', 'multiple', 'always_last'] | None

default `= None`

If "once" (default for all events except `.change()`), no submissions are
allowed while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/video", "source_page_title": "Gradio - Video Docs"}, {"text": "his endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will
automatically be set to "private".


time_limit: int | None

default `= None`


stream_every: float

default `= 0.5`


key: int | str | tuple[int | str, ...] | None

default `= None`

A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.


validator: Callable | None

default `= None`

Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.

Helper Classes

Webcam Options

gradio.WebcamOptions(···)

Description

A dataclass for specifying options for the webcam tool in the ImageEditor
component. An instance of this class can be passed to the `webcam_options`
parameter of `gr.ImageEditor`.

Initialization

Parameters ▼


mirror: bool

default `= True`

If True, the webcam will be mirrored.


constraints: dict[str, Any] | None

default `= None`

A dictionary of constraints for the webcam.

is_video_correct_length

Validates that the video length is within the specified min and max length (in
seconds). 
You can use this to construct a validator that will check if the
user-provided video is either too short or too long.

    import gradio as gr
    demo = gr.Interface(
        lambda x: x,
        inputs="video",
        outputs="video",
        validator=lambda video: gr.validators.is_video_correct_length(video, min_length=1, max_length=5)
    )
    demo.launch()

Initialization

Parameters ▼


video: 

The path to the video file.


min_length: float | None

Minimum length of video in seconds. If None, no minimum length check is
performed.


max_length: float | None

Maximum length of video in seconds. If None, no maximum length check is
performed.

gr.load

Description

Constructs a Gradio app automatically from a Hugging Face model/Space repo
name or a 3rd-party API provider. Note that if a Space repo is loaded, certain
high-level attributes of the Blocks (e.g. custom `css`, `js`, and `head`
attributes) will not be loaded.

Example Usage

    import gradio as gr
    demo = gr.load("gradio/question-answering", src="spaces")
    demo.launch()

Initialization

Parameters ▼


name: str

the name of the model (e.g. "google/vit-base-patch16-224") or Space (e.g.
"flax-community/spanish-gpt2"). This is the first parameter passed into the
`src` function. 
Can also be formatted as {src}/{repo name} (e.g.
"models/google/vit-base-patch16-224") if `src` is not provided.


src: Callable[[str, str | None], Blocks] | Literal['models', 'spaces', 'huggingface'] | None

default `= None`

function that accepts a string model `name` and a string or None `token` and
returns a Gradio app. Alternatively, this parameter takes one of two strings
for convenience: "models" (for loading a Hugging Face model through the
Inference API) or "spaces" (for loading a Hugging Face Space). If None, uses
the prefix of the `name` parameter to determine `src`.


token: str | None

default `= None`

optional token that is passed as the second parameter to the `src` function.
If not explicitly provided, will use the HF_TOKEN environment variable or
fall back to the locally-saved HF token when loading models but not Spaces
(when loading Spaces, only provide a token if you are loading a trusted
private Space as the token can be read by the Space you are loading). Find
your HF tokens here: https://huggingface.co/settings/tokens.


accept_token: bool | LoginButton

default `= False`

if True, a Textbox component is first rendered to allow the user to provide a
token, which will be used instead of the `token` parameter when calling the
loaded model or Space. Can also provide an instance of a gr.LoginButton in the
same Blocks scope, which allows the user to login with a Hugging Face account
whose token will be used instead of the `token` parameter when calling the
loaded model or Space.


provider: PROVIDER_T | None

default `= None`

the name of the third-party (non-Hugging Face) provider to use for model
inference (e.g. "replicate", "sambanova", "fal-ai", etc.). 
Should be one of the
providers supported by `huggingface_hub.InferenceClient`. This parameter is
only used when `src` is "models"


kwargs: 

additional keyword parameters to pass into the `src` function. If `src` is
"models" or "spaces", these parameters are passed into the `gr.Interface` or
`gr.ChatInterface` constructor.

FileData

Description

The FileData class is a subclass of the GradioModel class that represents a
file object within a Gradio interface. It is used to store file data and
metadata when a file is uploaded. 
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/filedata", "source_page_title": "Gradio - Filedata Docs"}, {"text": "from gradio_client import Client, FileData, handle_file\n \n def get_url_on_server(data: FileData):\n print(data['url'])\n \n client = Client(\"gradio/gif_maker_main\", download_files=False)\n job = client.submit([handle_file(\"./cheetah.jpg\")], api_name=\"/predict\")\n data = job.result()\n video: FileData = data['video']\n \n get_url_on_server(video)\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/filedata", "source_page_title": "Gradio - Filedata Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n path: str\n\nThe server file path where the file is stored.\n\n\n \n \n url: Optional[str]\n\nThe normalized server URL pointing to the file.\n\n\n \n \n size: Optional[int]\n\nThe size of the file in bytes.\n\n\n \n \n orig_name: Optional[str]\n\nThe original filename before upload.\n\n\n \n \n mime_type: Optional[str]\n\nThe MIME type of the file.\n\n\n \n \n is_stream: bool\n\nIndicates whether the file is a stream.\n\n\n \n \n meta: dict\n\nAdditional metadata used internally (should not be changed).\n\n", "heading1": "Attributes", "source_page_url": "https://gradio.app/docs/gradio/filedata", "source_page_title": "Gradio - Filedata Docs"}, {"text": "The gr.DeletedFileData class is a subclass of gr.EventData that\nspecifically carries information about the `.delete()` event. When\ngr.DeletedFileData is added as a type hint to an argument of an event listener\nmethod, a gr.DeletedFileData object will automatically be passed as the value\nof that argument. 
The attributes of this object contain information about the
event that triggered the listener.

Example Usage

    import gradio as gr

    def test(delete_data: gr.DeletedFileData):
        return delete_data.file.path

    with gr.Blocks() as demo:
        files = gr.File(file_count="multiple")
        deleted_file = gr.File()
        files.delete(test, None, deleted_file)

    demo.launch()

Attributes

Parameters ▼


file: FileData

The file that was deleted, as a FileData object. The str path to the file can
be retrieved with the .path attribute.

Demos

file_component_events

gr.Timer

Description

Special component that ticks at regular intervals when active. 
It is not
visible, and only used to trigger events at a regular interval through the
`tick` event listener.

Behavior

**As input component** : The interval of the timer as a float.

Your function should accept one of these types:

    def predict(
        value: float | None
    )
        ...

**As output component** : The interval of the timer as a float or None.

Your function should return one of these types:

    def predict(···) -> float | None
        ...
        return value

Initialization

Parameters ▼


value: float

default `= 1`

Interval in seconds between each tick.


active: bool

default `= True`

Whether the timer is active.


render: bool

default `= True`

If False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.

Shortcuts

Class| Interface String Shortcut| Initialization 
---|---|--- 
`gradio.Timer`| "timer"| Uses default values 

Event Listeners

Description

Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.

Supported Event Listeners

The Timer component supports the following event listeners. 
Each event
listener takes the same parameters, which are listed in the Event Parameters
table below.

Listener| Description 
---|--- 
`Timer.tick(fn, ···)`| This listener is triggered at regular intervals defined
by the Timer. 
 
Event Parameters

Parameters ▼


fn: Callable | None | Literal['decorator']

default `= "decorator"`

the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.


inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.


outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.


api_name: str | None

default `= None`

defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.


api_description: str | None | Literal[False]

default `= None`

Description of the API endpoint. Can be a string, None, or False. 
If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"hidden\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.


max_batch_size: int

default `= 4`

Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)


preprocess: bool

default `= True`

If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).


postprocess: bool

default `= True`

If False, will not run postprocessing of component data before returning 'fn'
output to the browser.


cancels: dict[str, Any] | list[dict[str, Any]] | None

default `= None`

A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.


trigger_mode: Literal['once', 'multiple', 'always_last'] | None

default `= None`

If "once" (default for all events except `.change()`), no submissions are
allowed while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurren", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "usly. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "Mount a gradio.Blocks to an existing FastAPI application. 
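To make the `batch=True` contract documented above concrete, here is a minimal sketch (a hypothetical `trim_words` function, not taken from the docs above): with batching enabled, the function receives one list per input component and must return a tuple of lists, even when there is a single output component.

```python
def trim_words(words: list[str], lengths: list[str]) -> tuple[list[str]]:
    # With batch=True, Gradio passes up to max_batch_size queued values
    # per input component, as parallel lists.
    trimmed = [word[: int(length)] for word, length in zip(words, lengths)]
    # The return value must be a tuple of lists, one list per output
    # component -- even when there is only one output component.
    return (trimmed,)
```

Wired to an event with e.g. `btn.click(trim_words, [words_box, length_box], output_box, batch=True, max_batch_size=16)` (component names hypothetical), concurrent submissions from the queue are grouped into a single call.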
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/mount_gradio_app", "source_page_title": "Gradio - Mount_Gradio_App Docs"}, {"text": "from fastapi import FastAPI\n import gradio as gr\n app = FastAPI()\n @app.get(\"/\")\n def read_main():\n     return {\"message\": \"This is your main app\"}\n io = gr.Interface(lambda x: \"Hello, \" + x + \"!\", \"textbox\", \"textbox\")\n app = gr.mount_gradio_app(app, io, path=\"/gradio\")\n\nThen run `uvicorn run:app` from the terminal and navigate to\nhttp://localhost:8000/gradio.\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/mount_gradio_app", "source_page_title": "Gradio - Mount_Gradio_App Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n app: fastapi.FastAPI\n\nThe parent FastAPI application.\n\n\n \n \n blocks: gradio.Blocks\n\nThe blocks object we want to mount to the parent app.\n\n\n \n \n path: str\n\nThe path at which the gradio application will be mounted, e.g. \"/gradio\".\n\n\n \n \n server_name: str\n\ndefault `= \"0.0.0.0\"`\n\nThe server name on which the Gradio app will be run.\n\n\n \n \n server_port: int\n\ndefault `= 7860`\n\nThe port on which the Gradio app will be run.\n\n\n \n \n footer_links: list[Literal['api', 'gradio', 'settings'] | dict[str, str]] | None\n\ndefault `= None`\n\nThe links to display in the footer of the app. Accepts a list, where each\nelement of the list must be one of \"api\", \"gradio\", or \"settings\"\ncorresponding to the API docs, \"built with Gradio\", and settings pages\nrespectively. If None, all three links will be shown in the footer. An empty\nlist means that no footer is shown.\n\n\n \n \n app_kwargs: dict[str, Any] | None\n\ndefault `= None`\n\nAdditional keyword arguments to pass to the underlying FastAPI app as a\ndictionary of parameter keys and argument values. 
For example, `{\"docs_url\":\n\"/docs\"}`\n\n\n \n \n auth: Callable | tuple[str, str] | list[tuple[str, str]] | None\n\ndefault `= None`\n\nIf provided, username and password (or list of username-password tuples)\nrequired to access the gradio app. Can also provide a function that takes\nusername and password and returns True if valid login.\n\n\n \n \n auth_message: str | None\n\ndefault `= None`\n\nIf provided, HTML message provided on login page for this gradio app.\n\n\n \n \n auth_dependency: Callable[[fastapi.Request], str | None] | None\n\ndefault `= None`\n\nA function that takes a FastAPI request and returns a string user ID or None.\nIf the function returns None for a specific request, that user is not\nauthorized to access the gradio app (they will see a 401 Unauthorized\nresponse). To be used with external authentication systems like OAuth. Cannot\nbe used with `auth`.\n\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/mount_gradio_app", "source_page_title": "Gradio - Mount_Gradio_App Docs"}, {"text": "ic request, that user is not\nauthorized to access the gradio app (they will see a 401 Unauthorized\nresponse). To be used with external authentication systems like OAuth. Cannot\nbe used with `auth`.\n\n\n \n \n root_path: str | None\n\ndefault `= None`\n\nThe subpath corresponding to the public deployment of this FastAPI\napplication. For example, if the application is served at\n\"https://example.com/myapp\", the `root_path` should be set to \"/myapp\". A full\nURL beginning with http:// or https:// can be provided, which will be used in\nits entirety. Normally, this does not need to be provided (even if you are using\na custom `path`). However, if you are serving the FastAPI app behind a proxy,\nthe proxy may not provide the full path to the Gradio app in the request\nheaders. 
In which case, you can provide the root path here.\n\n\n \n \n allowed_paths: list[str] | None\n\ndefault `= None`\n\nList of complete filepaths or parent directories that this gradio app is\nallowed to serve. Must be absolute paths. Warning: if you provide directories,\nany files in these directories or their subdirectories are accessible to all\nusers of your app.\n\n\n \n \n blocked_paths: list[str] | None\n\ndefault `= None`\n\nList of complete filepaths or parent directories that this gradio app is not\nallowed to serve (i.e. users of your app are not allowed to access). Must be\nabsolute paths. Warning: takes precedence over `allowed_paths` and all other\ndirectories exposed by Gradio by default.\n\n\n \n \n favicon_path: str | None\n\ndefault `= None`\n\nIf a path to a file (.png, .gif, or .ico) is provided, it will be used as the\nfavicon for this gradio app's page.\n\n\n \n \n show_error: bool\n\ndefault `= True`\n\nIf True, any errors in the gradio app will be displayed in an alert modal and\nprinted in the browser console log. Otherwise, errors will only be visible in\nthe terminal session running the Gradio app.\n\n\n \n \n max_file_size: str | int | None\n\ndefault `= None`\n\nThe maximum file size in ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/mount_gradio_app", "source_page_title": "Gradio - Mount_Gradio_App Docs"}, {"text": "browser console log. Otherwise, errors will only be visible in\nthe terminal session running the Gradio app.\n\n\n \n \n max_file_size: str | int | None\n\ndefault `= None`\n\nThe maximum file size in bytes that can be uploaded. Can be a string of the\nform \"<value><unit>\", where value is any positive integer and unit is one of\n\"b\", \"kb\", \"mb\", \"gb\", \"tb\". 
If None, no limit is set.\n\n\n \n \n ssr_mode: bool | None\n\ndefault `= None`\n\nIf True, the Gradio app will be rendered using server-side rendering mode,\nwhich is typically more performant and provides better SEO, but this requires\nNode 20+ to be installed on the system. If False, the app will be rendered\nusing client-side rendering mode. If None, will use GRADIO_SSR_MODE\nenvironment variable or default to False.\n\n\n \n \n node_server_name: str | None\n\ndefault `= None`\n\nThe name of the Node server to use for SSR. If None, will use\nGRADIO_NODE_SERVER_NAME environment variable or search for a node binary in\nthe system.\n\n\n \n \n node_port: int | None\n\ndefault `= None`\n\nThe port on which the Node server should run. If None, will use\nGRADIO_NODE_SERVER_PORT environment variable or find a free port.\n\n\n \n \n enable_monitoring: bool | None\n\ndefault `= None`\n\n\n \n \n pwa: bool | None\n\ndefault `= None`\n\n\n \n \n i18n: I18n | None\n\ndefault `= None`\n\nIf provided, the i18n instance to use for this gradio app.\n\n\n \n \n mcp_server: bool | None\n\ndefault `= None`\n\nIf True, the MCP server will be launched on the gradio app. If None, will use\nGRADIO_MCP_SERVER environment variable or default to False.\n\n\n \n \n theme: Theme | str | None\n\ndefault `= None`\n\nA Theme object or a string representing a theme. If a string, will look for a\nbuilt-in theme with that name (e.g. \"soft\" or \"default\"), or will attempt to\nload a theme from the Hugging Face Hub (e.g. \"gradio/monochrome\"). If None,\nwill use the Default theme.\n\n\n \n \n css: str | None\n\ndefault `= None`\n\nCustom css as a code stri", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/mount_gradio_app", "source_page_title": "Gradio - Mount_Gradio_App Docs"}, {"text": " or will attempt to\nload a theme from the Hugging Face Hub (e.g. \"gradio/monochrome\"). 
If None,\nwill use the Default theme.\n\n\n \n \n css: str | None\n\ndefault `= None`\n\nCustom css as a code string. This css will be included in the demo webpage.\n\n\n \n \n css_paths: str | Path | list[str | Path] | None\n\ndefault `= None`\n\nCustom css as a pathlib.Path to a css file or a list of such paths. This css\nfiles will be read, concatenated, and included in the demo webpage. If the\n`css` parameter is also set, the css from `css` will be included first.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nCustom js as a code string. The custom js should be in the form of a single js\nfunction. This function will automatically be executed when the page loads.\nFor more flexibility, use the head parameter to insert js inside \n\n\"\"\"\n\nwith gr.Blocks() as demo:\n gr.HTML(\"

<h1>My App</h1>\")\n\ndemo.launch(head=head)\n```\n\nThe `head` parameter accepts any HTML tags you would normally insert into the\n`<head>` of a page. For example, you can also include `<meta>` tags to `head` in order to update the social sharing preview for your Gradio app like this:\n\n```py\nimport gradio as gr\n\ncustom_head = \"\"\"\n<!-- HTML Meta Tags (illustrative values) -->\n<title>Sample App</title>\n<meta name=\"description\" content=\"A sample Gradio app\">\n\n<!-- Facebook Meta Tags -->\n<meta property=\"og:url\" content=\"https://example.com\">\n<meta property=\"og:type\" content=\"website\">\n<meta property=\"og:title\" content=\"Sample App\">\n<meta property=\"og:description\" content=\"A sample Gradio app\">\n<meta property=\"og:image\" content=\"https://example.com/image.png\">\n\n<!-- Twitter Meta Tags -->\n<meta name=\"twitter:card\" content=\"summary_large_image\">\n<meta property=\"twitter:domain\" content=\"example.com\">\n<meta property=\"twitter:url\" content=\"https://example.com\">\n<meta name=\"twitter:title\" content=\"Sample App\">\n<meta name=\"twitter:description\" content=\"A sample Gradio app\">\n<meta name=\"twitter:image\" content=\"https://example.com/image.png\">\n\"\"\"\n\nwith gr.Blocks(title=\"My App\") as demo:\n gr.HTML(\"

<h1>My App</h1>\")\n\ndemo.launch(head=custom_head)\n```\n\n\n\nNote that injecting custom JS can affect browser behavior and accessibility (e.g. keyboard shortcuts may lead to unexpected behavior if your Gradio app is embedded in another webpage). You should test your interface across different browsers and be mindful of how scripts may interact with browser defaults. Here's an example where pressing `Shift + s` triggers the `click` event of a specific `Button` component if the browser focus is _not_ on an input component (e.g. `Textbox` component):\n\n```python\nimport gradio as gr\n\nshortcut_js = \"\"\"\n<script>\nfunction shortcuts(e) {\n    var event = document.all ? window.event : e;\n    switch (e.target.tagName.toLowerCase()) {\n        case \"input\":\n        case \"textarea\":\n        break;\n        default:\n        if (e.key.toLowerCase() == \"s\" && e.shiftKey) {\n            document.getElementById(\"my_btn\").click();\n        }\n    }\n}\ndocument.addEventListener('keypress', shortcuts, false);\n</script>\n\"\"\"\n\nwith gr.Blocks() as demo:\n action_button = gr.Button(value=\"Name\", elem_id=\"my_btn\")\n textbox = gr.Textbox()\n action_button.click(lambda : \"button pressed\", None, textbox)\n \ndemo.launch(head=shortcut_js)\n```\n\n", "heading1": "Adding custom JavaScript to your demo", "source_page_url": "https://gradio.app/guides/custom-CSS-and-JS", "source_page_title": "Building With Blocks - Custom Css And Js Guide"}, {"text": "Did you know that apart from being a full-stack machine learning demo, a Gradio Blocks app is also a regular-old python function!?\n\nThis means that if you have a gradio Blocks (or Interface) app called `demo`, you can use `demo` like you would any python function.\n\nSo doing something like `output = demo(\"Hello\", \"friend\")` will run the first event defined in `demo` on the inputs \"Hello\" and \"friend\" and store it\nin the variable `output`.\n\nIf I put you to sleep \ud83e\udd71, please bear with me! 
By using apps like functions, you can seamlessly compose Gradio apps.\nThe following section will show how.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "Let's say we have the following demo that translates english text to german text.\n\n$code_english_translator\n\nI already went ahead and hosted it in Hugging Face spaces at [gradio/english_translator](https://huggingface.co/spaces/gradio/english_translator).\n\nYou can see the demo below as well:\n\n$demo_english_translator\n\nNow, let's say you have an app that generates english text, but you wanted to additionally generate german text.\n\nYou could either:\n\n1. Copy the source code of my english-to-german translation and paste it in your app.\n\n2. Load my english-to-german translation in your app and treat it like a normal python function.\n\nOption 1 technically always works, but it often introduces unwanted complexity.\n\nOption 2 lets you borrow the functionality you want without tightly coupling our apps.\n\nAll you have to do is call the `Blocks.load` class method in your source file.\nAfter that, you can use my translation app like a regular python function!\n\nThe following code snippet and demo show how to use `Blocks.load`.\n\nNote that the variable `english_translator` is my english to german app, but it's used in `generate_text` like a regular function.\n\n$code_generate_english_german\n\n$demo_generate_english_german\n\n", "heading1": "Treating Blocks like functions", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "If the app you are loading defines more than one function, you can specify which function to use\nwith the `fn_index` and `api_name` parameters.\n\nIn the code for our english to german demo, you'll see the 
following line:\n\n```python\ntranslate_btn.click(translate, inputs=english, outputs=german, api_name=\"translate-to-german\")\n```\n\nThe `api_name` gives this function a unique name in our app. You can use this name to tell gradio which\nfunction in the upstream space you want to use:\n\n```python\nenglish_generator(text, api_name=\"translate-to-german\")[0][\"generated_text\"]\n```\n\nYou can also use the `fn_index` parameter.\nImagine my app also defined an english to spanish translation function.\nIn order to use it in our text generation app, we would use the following code:\n\n```python\nenglish_generator(text, fn_index=1)[0][\"generated_text\"]\n```\n\nFunctions in gradio spaces are zero-indexed, so since the spanish translator would be the second function in my space,\nyou would use index 1.\n\n", "heading1": "How to control which function in the app to use", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "We showed how treating a Blocks app like a regular python function helps you compose functionality across different apps.\nAny Blocks app can be treated like a function, but a powerful pattern is to `load` an app hosted on\n[Hugging Face Spaces](https://huggingface.co/spaces) prior to treating it like a function in your own app.\nYou can also load models hosted on the [Hugging Face Model Hub](https://huggingface.co/models) - see the [Using Hugging Face Integrations](/using_hugging_face_integrations) guide for an example.\n\nHappy building! 
\u2692\ufe0f\n", "heading1": "Parting Remarks", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "Global state in Gradio apps is very simple: any variable created outside of a function is shared globally between all users.\n\nThis makes managing global state very simple, without the need for external services. For example, in this application, the `visitor_count` variable is shared between all users:\n\n```py\nimport gradio as gr\n\n# Shared between all users\nvisitor_count = 0\n\ndef increment_counter():\n    global visitor_count\n    visitor_count += 1\n    return visitor_count\n\nwith gr.Blocks() as demo: \n    number = gr.Textbox(label=\"Total Visitors\", value=\"Counting...\")\n    demo.load(increment_counter, inputs=None, outputs=number)\n\ndemo.launch()\n```\n\nThis means that any time you do _not_ want to share a value between users, you should declare it _within_ a function. But what if you need to share values between function calls, e.g. a chat history? In that case, you should use one of the subsequent approaches to manage state.\n\n", "heading1": "Global State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "Gradio supports session state, where data persists across multiple submits within a page session. To reiterate, session data is _not_ shared between different users of your model, and does _not_ persist if a user refreshes the page to reload the Gradio app. To store data in a session state, you need to do three things:\n\n1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor. Note that `gr.State` objects must be [deepcopy-able](https://docs.python.org/3/library/copy.html), otherwise you will need to use a different approach as described below.\n2. 
In the event listener, put the `State` object as an input and output as needed.\n3. In the event listener function, add the variable to the input parameters and the return value.\n\nLet's take a look at a simple example. We have a simple checkout app below where you add items to a cart. You can also see the size of the cart.\n\n$code_simple_state\n\nNotice how we do this with state:\n\n1. We store the cart items in a `gr.State()` object, initialized here to be an empty list.\n2. When adding items to the cart, the event listener uses the cart as both input and output - it returns the updated cart with all the items inside. \n3. We can attach a `.change` listener to `cart` that uses the state variable as input as well.\n\nYou can think of `gr.State` as an invisible Gradio component that can store any kind of value. Here, `cart` is not visible in the frontend but is used for calculations.\n\nThe `.change` listener for a state variable triggers after any event listener changes the value of a state variable. If the state variable holds a sequence (like a `list`, `set`, or `dict`), a change is triggered if any of the elements inside change. If it holds an object or primitive, a change is triggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__", "heading1": "Session State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "riggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__hash__` implementation.\n\nThe value of a session State variable is cleared when the user refreshes the page. 
The value is stored in the app backend for 60 minutes after the user closes the tab (this can be configured by the `delete_cache` parameter in `gr.Blocks`).\n\nLearn more about `State` in the [docs](https://gradio.app/docs/gradio/state).\n\n**What about objects that cannot be deepcopied?**\n\nAs mentioned earlier, the value stored in `gr.State` must be [deepcopy-able](https://docs.python.org/3/library/copy.html). If you are working with a complex object that cannot be deepcopied, you can take a different approach: manually read the user's `session_hash` and store a global `dictionary` with instances of your object for each user. Here's how you would do that:\n\n```py\nimport gradio as gr\n\nclass NonDeepCopyable:\n    def __init__(self):\n        from threading import Lock\n        self.counter = 0\n        self.lock = Lock()  # Lock objects cannot be deepcopied\n\n    def increment(self):\n        with self.lock:\n            self.counter += 1\n            return self.counter\n\n# Global dictionary to store user-specific instances\ninstances = {}\n\ndef initialize_instance(request: gr.Request):\n    instances[request.session_hash] = NonDeepCopyable()\n    return \"Session initialized!\"\n\ndef cleanup_instance(request: gr.Request):\n    if request.session_hash in instances:\n        del instances[request.session_hash]\n\ndef increment_counter(request: gr.Request):\n    if request.session_hash in instances:\n        instance = instances[request.session_hash]\n        return instance.increment()\n    return \"Error: Session not initialized\"\n\nwith gr.Blocks() as demo:\n    output = gr.Textbox(label=\"Status\")\n    counter = gr.Number(label=\"Counter Value\")\n    increment_btn = 
gr.Button(\"Increment Counter\")\n    increment_btn.click(increment_counter, inputs=None, outputs=counter)\n\n    # Initialize instance when page loads\n    demo.load(initialize_instance, inputs=None, outputs=output)\n    # Clean up instance when page is closed/refreshed\n    demo.unload(cleanup_instance)\n\ndemo.launch()\n```\n\n", "heading1": "Session State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "Gradio also supports browser state, where data persists in the browser's localStorage even after the page is refreshed or closed. This is useful for storing user preferences, settings, API keys, or other data that should persist across sessions. To use local state:\n\n1. Create a `gr.BrowserState` object. You can optionally provide an initial default value and a key to identify the data in the browser's localStorage.\n2. Use it like a regular `gr.State` component in event listeners as inputs and outputs.\n\nHere's a simple example that saves a user's username and password across sessions:\n\n$code_browserstate\n\nNote: The value stored in `gr.BrowserState` does not persist if the Gradio app is restarted. To persist it, you can hardcode specific values of `storage_key` and `secret` in the `gr.BrowserState` component and restart the Gradio app on the same server name and server port. However, this should only be done if you are running trusted Gradio apps, as in principle, this can allow one Gradio app to access localStorage data that was created by a different Gradio app.\n", "heading1": "Browser State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "Take a look at the demo below.\n\n$code_hello_blocks\n$demo_hello_blocks\n\n- First, note the `with gr.Blocks() as demo:` clause. The Blocks app code will be contained within this clause.\n- Next come the Components. 
These are the same Components used in `Interface`. However, instead of being passed to some constructor, Components are automatically added to the Blocks as they are created within the `with` clause.\n- Finally, the `click()` event listener. Event listeners define the data flow within the app. In the example above, the listener ties the two Textboxes together. The Textbox `name` acts as the input and Textbox `output` acts as the output to the `greet` method. This dataflow is triggered when the Button `greet_btn` is clicked. Like an Interface, an event listener can take multiple inputs or outputs.\n\nYou can also attach event listeners using decorators - skip the `fn` argument and assign `inputs` and `outputs` directly:\n\n$code_hello_blocks_decorator\n\n", "heading1": "Blocks Structure", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "In the example above, you'll notice that you are able to edit Textbox `name`, but not Textbox `output`. This is because any Component that acts as an input to an event listener is made interactive. However, since Textbox `output` acts only as an output, Gradio determines that it should not be made interactive. You can override the default behavior and directly configure the interactivity of a Component with the boolean `interactive` keyword argument, e.g. `gr.Textbox(interactive=True)`.\n\n```python\noutput = gr.Textbox(label=\"Output\", interactive=True)\n```\n\n_Note_: What happens if a Gradio component is neither an input nor an output? If a component is constructed with a default value, then it is presumed to be displaying content and is rendered non-interactive. Otherwise, it is rendered interactive. 
Again, this behavior can be overridden by specifying a value for the `interactive` argument.\n\n", "heading1": "Event Listeners and Interactivity", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Take a look at the demo below:\n\n$code_blocks_hello\n$demo_blocks_hello\n\nInstead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. See the [Docs](https://gradio.app/docs/components) for the event listeners for each Component.\n\n", "heading1": "Types of Event Listeners", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "A Blocks app is not limited to a single data flow the way Interfaces are. Take a look at the demo below:\n\n$code_reversible_flow\n$demo_reversible_flow\n\nNote that `num1` can act as input to `num2`, and also vice-versa! As your apps get more complex, you will have many data flows connecting various Components.\n\nHere's an example of a \"multi-step\" demo, where the output of one model (a speech-to-text model) gets fed into the next model (a sentiment classifier).\n\n$code_blocks_speech_text_sentiment\n$demo_blocks_speech_text_sentiment\n\n", "heading1": "Multiple Data Flows", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "The event listeners you've seen so far have a single input component. If you'd like to have multiple input components pass data to the function, you have two options on how the function can accept input component values:\n\n1. 
as a list of arguments, or\n2. as a single dictionary of values, keyed by the component\n\nLet's see an example of each:\n$code_calculator_list_and_dict\n\nBoth `add()` and `sub()` take `a` and `b` as inputs. However, the syntax is different between these listeners.\n\n1. To the `add_btn` listener, we pass the inputs as a list. The function `add()` takes each of these inputs as arguments. The value of `a` maps to the argument `num1`, and the value of `b` maps to the argument `num2`.\n2. To the `sub_btn` listener, we pass the inputs as a set (note the curly brackets!). When you pass a set, the function `sub()` receives a single dictionary argument `data`, where the keys are the input components and the values are the values of those components.\n\nIt is a matter of preference which syntax you prefer! For functions with many input components, option 2 may be easier to manage.\n\n$demo_calculator_list_and_dict\n\n", "heading1": "Function Input List vs Set", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Similarly, you may return values for multiple output components either as:\n\n1. a list of values, or\n2. a dictionary keyed by the component\n\nLet's first see an example of (1), where we set the values of two output components by returning two values:\n\n```python\nwith gr.Blocks() as demo:\n    food_box = gr.Number(value=10, label=\"Food Count\")\n    status_box = gr.Textbox()\n\n    def eat(food):\n        if food > 0:\n            return food - 1, \"full\"\n        else:\n            return 0, \"hungry\"\n\n    gr.Button(\"Eat\").click(\n        fn=eat,\n        inputs=food_box,\n        outputs=[food_box, status_box]\n    )\n```\n\nAbove, each return statement returns two values corresponding to `food_box` and `status_box`, respectively.\n\n**Note:** if your event listener has a single output component, you should **not** return it as a single-item list. 
This will not work, since Gradio does not know whether to interpret that outer list as part of your return value. You should instead just return that value directly.\n\nNow, let's see option (2). Instead of returning a list of values corresponding to each output component in order, you can also return a dictionary, with the key corresponding to the output component and the value as the new value. This also allows you to skip updating some output components.\n\n```python\nwith gr.Blocks() as demo:\n    food_box = gr.Number(value=10, label=\"Food Count\")\n    status_box = gr.Textbox()\n\n    def eat(food):\n        if food > 0:\n            return {food_box: food - 1, status_box: \"full\"}\n        else:\n            return {status_box: \"hungry\"}\n\n    gr.Button(\"Eat\").click(\n        fn=eat,\n        inputs=food_box,\n        outputs=[food_box, status_box]\n    )\n```\n\nNotice how when there is no food, we only update the `status_box` element. We skipped updating the `food_box` component.\n\nDictionary returns are helpful when an event listener affects many components on return, or conditionally affects some outputs and not others.\n\nKeep in mind that with dictionary returns,", "heading1": "Function Return List vs Dict", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "d_box` component.\n\nDictionary returns are helpful when an event listener affects many components on return, or conditionally affects some outputs and not others.\n\nKeep in mind that with dictionary returns, we still need to specify the possible outputs in the event listener.\n\n", "heading1": "Function Return List vs Dict", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "The return value of an event listener function is usually the updated value of the corresponding output Component. 
Sometimes we want to update the configuration of the Component as well, such as the visibility. In this case, we return a new Component, setting the properties we want to change.\n\n$code_blocks_essay_simple\n$demo_blocks_essay_simple\n\nSee how we can configure the Textbox itself through a new `gr.Textbox()` method. The `value=` argument can still be used to update the value along with Component configuration. Any arguments we do not set will preserve their previous values.\n\n", "heading1": "Updating Component Configurations", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "In some cases, you may want to leave a component's value unchanged. Gradio includes a special function, `gr.skip()`, which can be returned from your function. Returning this function will keep the output component (or components') values as is. Let us illustrate with an example:\n\n$code_skip\n$demo_skip\n\nNote the difference between returning `None` (which generally resets a component's value to an empty state) versus returning `gr.skip()`, which leaves the component value unchanged.\n\nTip: if you have multiple output components, and you want to leave all of their values unchanged, you can just return a single `gr.skip()` instead of returning a tuple of skips, one for each element.\n\n", "heading1": "Not Changing a Component's Value", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "You can also run events consecutively by using the `then` method of an event listener. This will run an event after the previous event has finished running. 
This is useful for running events that update components in multiple steps.\n\nFor example, in the chatbot example below, we first update the chatbot with the user message immediately, and then update the chatbot with the computer response after a simulated delay.\n\n$code_chatbot_consecutive\n$demo_chatbot_consecutive\n\nThe `.then()` method of an event listener executes the subsequent event regardless of whether the previous event raised any errors. If you'd like to only run subsequent events if the previous event executed successfully, use the `.success()` method, which takes the same arguments as `.then()`. Conversely, if you'd like to only run subsequent events if the previous event failed (i.e., raised an error), use the `.failure()` method. This is particularly useful for error handling workflows, such as displaying error messages or restoring previous states when an operation fails.\n\n", "heading1": "Running Events Consecutively", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Oftentimes, you may want to bind multiple triggers to the same function. For example, you may want to allow a user to click a submit button, or press enter to submit a form. You can do this using the `gr.on` method and passing a list of triggers to the `triggers` argument.\n\n$code_on_listener_basic\n$demo_on_listener_basic\n\nYou can use decorator syntax as well:\n\n$code_on_listener_decorator\n\nYou can use `gr.on` to create \"live\" events by binding to the `change` event of components that implement it. If you do not specify any triggers, the function will automatically bind to the `change` event of all input components that include one (for example `gr.Textbox` has a `change` event whereas `gr.Button` does not).\n\n$code_on_listener_live\n$demo_on_listener_live\n\nYou can follow `gr.on` with `.then`, just like any regular event listener. 
This handy method should save you from having to write a lot of repetitive code!\n\n", "heading1": "Binding Multiple Triggers to a Function", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "If you want to set a Component's value to always be a function of the value of other Components, you can use the following shorthand:\n\n```python\nwith gr.Blocks() as demo:\n num1 = gr.Number()\n num2 = gr.Number()\n product = gr.Number(lambda a, b: a * b, inputs=[num1, num2])\n```\n\nThis is functionally the same as:\n```python\nwith gr.Blocks() as demo:\n num1 = gr.Number()\n num2 = gr.Number()\n product = gr.Number()\n\n gr.on(\n [num1.change, num2.change, demo.load], \n lambda a, b: a * b, \n inputs=[num1, num2], \n outputs=product\n )\n```\n", "heading1": "Binding a Component Value Directly to a Function of Other Components", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "The `gr.HTML` component can also be used to create custom input components by triggering events. You will provide `js_on_load`, JavaScript code that runs when the component loads. The code has access to the `trigger` function to trigger events that Gradio can listen to, and the object `props` which has access to all the props of the component, including `value`.\n\n$code_star_rating_events\n$demo_star_rating_events\n\nTake a look at the `js_on_load` code above. We add click event listeners to each star image to update the value via `props.value` when a star is clicked. This also re-renders the template to show the updated value. We also add a click event listener to the submit button that triggers the `submit` event. 
In our app, we listen to this trigger to run a function that outputs the `value` of the star rating.\n\nYou can update any other props of the component via `props.<prop_name>`, and trigger events via `trigger('<event_name>')`. The `trigger` function can also send event data, e.g.\n\n```js\ntrigger('event_name', { key: value, count: 123 });\n```\n\nThis event data will be accessible to the Python event listener functions via `gr.EventData`.\n\n```python\ndef handle_event(evt: gr.EventData):\n print(evt.key)\n print(evt.count)\n\nstar_rating.event(fn=handle_event, inputs=[], outputs=[])\n```\n\nKeep in mind that event listeners attached in `js_on_load` are only attached once when the component is first rendered. If your component creates new elements dynamically that need event listeners, attach the event listener to a parent element that exists when the component loads, and check for the target. For example:\n\n```js\nelement.addEventListener('click', (e) => {\n if (e.target && e.target.matches('.child-element')) {\n props.value = e.target.dataset.value;\n }\n});\n```\n\n", "heading1": "Triggering Events and Custom Input Components", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "If you are reusing the same HTML component in multiple places, you can create a custom component class by subclassing `gr.HTML` and setting default values for the templates and other arguments. Here's an example of creating a reusable StarRating component.\n\n$code_star_rating_component\n$demo_star_rating_component\n\nNote: Gradio requires all components to accept certain arguments, such as `render`. You do not need\nto handle these arguments, but you do need to accept them in your component constructor and pass\nthem to the parent `gr.HTML` class. Otherwise, your component may not behave correctly. 
The easiest\nway is to add `**kwargs` to your `__init__` method and pass it to `super().__init__()`, just like in the code example above.\n\nWe've created several reusable custom HTML components as examples that you can reference in [this directory](https://github.com/gradio-app/gradio/tree/main/gradio/components/custom_html_components).\n\nAPI / MCP support\n\nTo make your custom HTML component work with Gradio's built-in support for API and MCP (Model Context Protocol) usage, you need to define how its data should be serialized. There are two ways to do this:\n\n**Option 1: Define an `api_info()` method**\n\nAdd an `api_info()` method that returns a JSON schema dictionary describing your component's data format. This is what we do in the StarRating class above.\n\n**Option 2: Define a Pydantic data model**\n\nFor more complex data structures, you can define a Pydantic model that inherits from `GradioModel` or `GradioRootModel`:\n\n```python\nfrom typing import List\n\nfrom gradio.data_classes import GradioModel, GradioRootModel\n\nclass MyComponentData(GradioModel):\n items: List[str]\n count: int\n\nclass MyComponent(gr.HTML):\n data_model = MyComponentData\n```\n\nUse `GradioModel` when your data is a dictionary with named fields, or `GradioRootModel` when your data is a simple type (string, list, etc.) that doesn't need to be wrapped in a dictionary. By defining a `data_model`, your component automaticall
By defining a `data_model`, your component automatically implements API methods.\n\n", "heading1": "Component Classes", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "Keep in mind that using `gr.HTML` to create custom components involves injecting raw HTML and JavaScript into your Gradio app. Be cautious about using untrusted user input into `html_template` and `js_on_load`, as this could lead to cross-site scripting (XSS) vulnerabilities. \n\nYou should also expect that any Python event listeners that take your `gr.HTML` component as input could have any arbitrary value passed to them, not just the values you expect the frontend to be able to set for `value`. Sanitize and validate user input appropriately in public applications.\n\n", "heading1": "Security Considerations", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "Check out some examples of custom components that you can build in [this directory](https://github.com/gradio-app/gradio/tree/main/gradio/components/custom_html_components).", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "Elements within a `with gr.Row` clause will all be displayed horizontally. For example, to display two Buttons side by side:\n\n```python\nwith gr.Blocks() as demo:\n with gr.Row():\n btn1 = gr.Button(\"Button 1\")\n btn2 = gr.Button(\"Button 2\")\n```\n\nYou can set every element in a Row to have the same height. 
Configure this with the `equal_height` argument.\n\n```python\nwith gr.Blocks() as demo:\n with gr.Row(equal_height=True):\n textbox = gr.Textbox()\n btn2 = gr.Button(\"Button 2\")\n```\n\nThe widths of elements in a Row can be controlled via a combination of `scale` and `min_width` arguments that are present in every Component.\n\n- `scale` is an integer that defines how an element will take up space in a Row. If scale is set to `0`, the element will not expand to take up space. If scale is set to `1` or greater, the element will expand. Multiple elements in a row will expand in proportion to their scale. Below, `btn2` will expand twice as much as `btn1`, while `btn0` will not expand at all:\n\n```python\nwith gr.Blocks() as demo:\n with gr.Row():\n btn0 = gr.Button(\"Button 0\", scale=0)\n btn1 = gr.Button(\"Button 1\", scale=1)\n btn2 = gr.Button(\"Button 2\", scale=2)\n```\n\n- `min_width` will set the minimum width the element will take. The Row will wrap if there isn't sufficient space to satisfy all `min_width` values.\n\nLearn more about Rows in the [docs](https://gradio.app/docs/row).\n\n", "heading1": "Rows", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:\n\n$code_rows_and_columns\n$demo_rows_and_columns\n\nSee how the first column has two Textboxes arranged vertically. The second column has an Image and Button arranged vertically. Notice how the relative widths of the two columns are set by the `scale` parameter. 
The column with twice the `scale` value takes up twice the width.\n\nLearn more about Columns in the [docs](https://gradio.app/docs/column).\n\nFill Browser Height / Width\n\nTo make an app take the full width of the browser by removing the side padding, use `gr.Blocks(fill_width=True)`. \n\nTo make top level Components expand to take the full height of the browser, use `fill_height` and apply scale to the expanding Components.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks(fill_height=True) as demo:\n gr.Chatbot(scale=1)\n gr.Textbox(scale=0)\n```\n\n", "heading1": "Columns and Nesting", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Some components support setting height and width. These parameters accept either a number (interpreted as pixels) or a string. Using a string allows the direct application of any CSS unit to the encapsulating Block element.\n\nBelow is an example illustrating the use of viewport width (vw):\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n im = gr.ImageEditor(width=\"50vw\")\n\ndemo.launch()\n```\n\n", "heading1": "Dimensions", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "You can also create Tabs using the `with gr.Tab('tab_name'):` clause. Any component created inside of a `with gr.Tab('tab_name'):` context appears in that tab. Consecutive Tab clauses are grouped together so that a single tab can be selected at one time, and only the components within that Tab's context are shown.\n\nFor example:\n\n$code_blocks_flipper\n$demo_blocks_flipper\n\nAlso note the `gr.Accordion('label')` in this example. The Accordion is a layout that can be toggled open or closed. Like `Tabs`, it is a layout element that can selectively hide or show content. 
Any components that are defined inside of a `with gr.Accordion('label'):` will be hidden or shown when the accordion's toggle icon is clicked.\n\nLearn more about [Tabs](https://gradio.app/docs/tab) and [Accordions](https://gradio.app/docs/accordion) in the docs.\n\n", "heading1": "Tabs and Accordions", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "The sidebar is a collapsible panel that renders child components on the left side of the screen and can be expanded or collapsed.\n\nFor example:\n\n$code_blocks_sidebar\n\nLearn more about [Sidebar](https://gradio.app/docs/gradio/sidebar) in the docs.\n\n\n", "heading1": "Sidebar", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "To provide a guided, controlled workflow made up of ordered steps, you can use the `Walkthrough` component with accompanying `Step` components.\n\nThe `Walkthrough` component has a visual style and user experience tailored for this use case.\n\nAuthoring this component is very similar to `Tab`, except it is the app developer's responsibility to progress through each step, by setting the appropriate ID for the parent `Walkthrough`, which should correspond to an ID provided to an individual `Step`. \n\n$demo_walkthrough\n\nLearn more about [Walkthrough](https://gradio.app/docs/gradio/walkthrough) in the docs.\n\n\n", "heading1": "Multi-step walkthroughs", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Both Components and Layout elements have a `visible` argument that can be set initially and also updated. 
Setting `gr.Column(visible=...)` on a Column can be used to show or hide a set of Components.\n\n$code_blocks_form\n$demo_blocks_form\n\n", "heading1": "Visibility", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "In some cases, you might want to define components before you actually render them in your UI. For instance, you might want to show an examples section using `gr.Examples` above the corresponding `gr.Textbox` input. Since `gr.Examples` requires the input component object as a parameter, you will need to first define the input component, but then render it later, after you have defined the `gr.Examples` object.\n\nThe solution to this is to define the `gr.Textbox` outside of the `gr.Blocks()` scope and use the component's `.render()` method wherever you'd like it placed in the UI.\n\nHere's a full code example:\n\n```python\ninput_textbox = gr.Textbox()\n\nwith gr.Blocks() as demo:\n gr.Examples([\"hello\", \"bonjour\", \"merhaba\"], input_textbox)\n input_textbox.render()\n```\n\nSimilarly, if you have already defined a component in a Gradio app, but wish to unrender it so that you can render it in a different part of your application, then you can call the `.unrender()` method. In the following example, the `Textbox` will appear in the third column:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n gr.Markdown(\"Row 1\")\n textbox = gr.Textbox()\n with gr.Column():\n gr.Markdown(\"Row 2\")\n textbox.unrender()\n with gr.Column():\n gr.Markdown(\"Row 3\")\n textbox.render()\n\ndemo.launch()\n```\n\n", "heading1": "Defining and Rendering Components Separately", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "The next generation of AI user interfaces is moving towards audio-native experiences. 
Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).\n\nIn this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Our application will enable the following user experience:\n\n1. Users click a button to start recording their message\n2. The app detects when the user has finished speaking and stops recording\n3. The user's audio is passed to the omni model, which streams back a response\n4. After omni mini finishes speaking, the user's microphone is reactivated\n5. All previous spoken audio, from both the user and omni, is displayed in a chatbot component\n\nLet's dive into the implementation details.\n\n", "heading1": "Application Overview", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio.\n\nHere's our `process_audio` function:\n\n```python\nimport numpy as np\nfrom utils import determine_pause\n\ndef process_audio(audio: tuple, state: AppState):\n if state.stream is None:\n state.stream = audio[1]\n state.sampling_rate = audio[0]\n else:\n state.stream = np.concatenate((state.stream, audio[1]))\n\n pause_detected = determine_pause(state.stream, state.sampling_rate, state)\n state.pause_detected = pause_detected\n\n if state.pause_detected and state.started_talking:\n return gr.Audio(recording=False), state\n return None, state\n```\n\nThis function takes two inputs:\n1. 
The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)\n2. The current application state\n\nWe'll use the following `AppState` dataclass to manage our application state:\n\n```python\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass AppState:\n stream: np.ndarray | None = None\n sampling_rate: int = 0\n pause_detected: bool = False\n started_talking: bool = False\n stopped: bool = False\n conversation: list = field(default_factory=list)\n```\n\nThe function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. Otherwise, it returns `None` to indicate no changes.\n\nThe implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98).\n\n", "heading1": "Processing User Audio", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "After processing the user's audio, we need to generate and stream the chatbot's response. 
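The core streaming pattern used here, yielding each chunk to the client as it arrives while also accumulating the chunks so the full reply can be saved afterwards, can be sketched in plain Python (a hypothetical `stream_chunks` helper, not part of the app):

```python
# Sketch of the accumulate-while-yielding pattern: each chunk is yielded
# immediately and also appended to a buffer; the buffer is yielded last.
def stream_chunks(chunks):
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        yield chunk
    yield buffer  # final value: everything that was streamed

*streamed, total = list(stream_chunks([b"ab", b"cd"]))
```

Here `streamed` is the list of chunks sent to the client and `total` is the complete payload, mirroring how the app streams MP3 bytes and then saves the full response.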
Here's our `response` function:\n\n```python\nimport io\nimport tempfile\nfrom pydub import AudioSegment\n\ndef response(state: AppState):\n if not state.pause_detected and not state.started_talking:\n return None, AppState()\n \n audio_buffer = io.BytesIO()\n\n segment = AudioSegment(\n state.stream.tobytes(),\n frame_rate=state.sampling_rate,\n sample_width=state.stream.dtype.itemsize,\n channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),\n )\n segment.export(audio_buffer, format=\"wav\")\n\n with tempfile.NamedTemporaryFile(suffix=\".wav\", delete=False) as f:\n f.write(audio_buffer.getvalue())\n \n state.conversation.append({\"role\": \"user\",\n \"content\": {\"path\": f.name,\n \"mime_type\": \"audio/wav\"}})\n \n output_buffer = b\"\"\n\n for mp3_bytes in speaking(audio_buffer.getvalue()):\n output_buffer += mp3_bytes\n yield mp3_bytes, state\n\n with tempfile.NamedTemporaryFile(suffix=\".mp3\", delete=False) as f:\n f.write(output_buffer)\n \n state.conversation.append({\"role\": \"assistant\",\n \"content\": {\"path\": f.name,\n \"mime_type\": \"audio/mp3\"}})\n yield None, AppState(conversation=state.conversation)\n```\n\nThis function:\n1. Converts the user's audio to a WAV file\n2. Adds the user's message to the conversation history\n3. Generates and streams the chatbot's response using the `speaking` function\n4. Saves the chatbot's response as an MP3 file\n5. 
Adds the chatbot's response to the conversation history\n\nNote: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).\n\n", "heading1": "Generating the Response", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Now let's put it all together using Gradio's Blocks API:\n\n```python\nimport gradio as gr\n\ndef start_recording_user(state: AppState):\n if not state.stopped:\n return gr.Audio(recording=True)\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n input_audio = gr.Audio(\n label=\"Input Audio\", sources=\"microphone\", type=\"numpy\"\n )\n with gr.Column():\n chatbot = gr.Chatbot(label=\"Conversation\")\n output_audio = gr.Audio(label=\"Output Audio\", streaming=True, autoplay=True)\n state = gr.State(value=AppState())\n\n stream = input_audio.stream(\n process_audio,\n [input_audio, state],\n [input_audio, state],\n stream_every=0.5,\n time_limit=30,\n )\n respond = input_audio.stop_recording(\n response,\n [state],\n [output_audio, state]\n )\n respond.then(lambda s: s.conversation, [state], [chatbot])\n\n restart = output_audio.stop(\n start_recording_user,\n [state],\n [input_audio]\n )\n cancel = gr.Button(\"Stop Conversation\", variant=\"stop\")\n cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,\n [state, input_audio], cancels=[respond, restart])\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nThis setup creates a user interface with:\n- An input audio component for recording user messages\n- A chatbot component to display the conversation history\n- An output audio component for the chatbot's responses\n- A button to stop and reset the conversation\n\nThe app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history 
accordingly.\n\n", "heading1": "Building the Gradio App", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini\n\nFeel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences!", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Just like the classic Magic 8 Ball, a user should ask it a question orally and then wait for a response. Under the hood, we'll use Whisper to transcribe the audio and then use an LLM to generate a magic-8-ball-style answer. Finally, we'll use Parler TTS to read the response aloud.\n\n", "heading1": "The Overview", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "First let's define the UI and put placeholders for all the python logic.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as block:\n gr.HTML(\n f\"\"\"\n

<h1 style='text-align: center;'> Magic 8 Ball \ud83c\udfb1 </h1>\n
<h3 style='text-align: center;'> Ask a question and receive wisdom </h3>\n
<p style='text-align: center;'>
Powered by Parler-TTS\n \"\"\"\n )\n with gr.Group():\n with gr.Row():\n audio_out = gr.Audio(label=\"Spoken Answer\", streaming=True, autoplay=True)\n answer = gr.Textbox(label=\"Answer\")\n state = gr.State()\n with gr.Row():\n audio_in = gr.Audio(label=\"Speak your question\", sources=\"microphone\", type=\"filepath\")\n\n audio_in.stop_recording(generate_response, audio_in, [state, answer, audio_out])\\\n .then(fn=read_response, inputs=state, outputs=[answer, audio_out])\n\nblock.launch()\n```\n\nWe're placing the output Audio and Textbox components and the input Audio component in separate rows. In order to stream the audio from the server, we'll set `streaming=True` in the output Audio component. We'll also set `autoplay=True` so that the audio plays as soon as it's ready.\nWe'll be using the Audio input component's `stop_recording` event to trigger our application's logic when a user stops recording from their microphone.\n\nWe're separating the logic into two parts. First, `generate_response` will take the recorded audio, transcribe it and generate a response with an LLM. We're going to store the response in a `gr.State` variable that then gets passed to the `read_response` function that generates the audio.\n\nWe're doing this in two parts because only `read_response` will require a GPU. Our app will run on Hugging Faces [ZeroGPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU func", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "GPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. 
Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU function as it will needlessly use our GPU quota.\n\n", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "As mentioned above, we'll use [Hugging Face's Inference API](https://huggingface.co/docs/huggingface_hub/guides/inference) to transcribe the audio and generate a response from an LLM. After instantiating the client, I use the `automatic_speech_recognition` method (this automatically uses Whisper running on Hugging Face's Inference Servers) to transcribe the audio. Then I pass the question to an LLM (Mistral-7B-Instruct) to generate a response. We are prompting the LLM to act like a magic 8 ball with the system message.\n\nOur `generate_response` function will also send empty updates to the output textbox and audio components (returning `None`). \nThis is because I want the Gradio progress tracker to be displayed over the components but I don't want to display the answer until the audio is ready.\n\n\n```python\nimport os\nimport random\n\nimport gradio as gr\nfrom huggingface_hub import InferenceClient\n\nclient = InferenceClient(token=os.getenv(\"HF_TOKEN\"))\n\ndef generate_response(audio):\n gr.Info(\"Transcribing Audio\", duration=5)\n question = client.automatic_speech_recognition(audio).text\n\n messages = [{\"role\": \"system\", \"content\": (\"You are a magic 8 ball.\"\n \"Someone will present to you a situation or question and your job \"\n \"is to answer with a cryptic adage or proverb such as \"\n \"'curiosity killed the cat' or 'The early bird gets the worm'.\"\n \"Keep your answers short and do not include the phrase 'Magic 8 Ball' in your response. 
If the question does not make sense or is off-topic, say 'Foolish questions get foolish answers.'\"\n \"For example, 'Magic 8 Ball, should I get a dog?', 'A dog is ready for you but are you ready for the dog?'\")},\n {\"role\": \"user\", \"content\": f\"Magic 8 Ball please answer this question - {question}\"}]\n \n response = client.chat_completion(messages,", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "for you but are you ready for the dog?'\")},\n {\"role\": \"user\", \"content\": f\"Magic 8 Ball please answer this question - {question}\"}]\n \n response = client.chat_completion(messages, max_tokens=64, seed=random.randint(1, 5000),\n model=\"mistralai/Mistral-7B-Instruct-v0.3\")\n\n response = response.choices[0].message.content.replace(\"Magic 8 Ball\", \"\").replace(\":\", \"\")\n return response, None, None\n```\n\n\nNow that we have our text response, we'll read it aloud with Parler TTS. The `read_response` function will be a python generator that yields the next chunk of audio as it's ready.\n\n\nWe'll be using the [Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) for the feature extraction but the [Jenny fine tuned version](https://huggingface.co/parler-tts/parler-tts-mini-jenny-30H) for the voice. This is so that the voice is consistent across generations.\n\n\nStreaming audio with transformers requires a custom Streamer class. You can see the implementation [here](https://huggingface.co/spaces/gradio/magic-8-ball/blob/main/streamer.py). Additionally, we'll convert the output to bytes so that it can be streamed faster from the backend. 
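The array-to-bytes idea can be illustrated with plain NumPy (a simplified sketch: the app's actual `numpy_to_mp3` helper additionally wraps the samples in an MP3 container before yielding them):

```python
import numpy as np

# A float waveform scaled to 16-bit PCM and serialized to raw bytes;
# byte payloads like this are what get streamed from the backend.
audio = np.sin(np.linspace(0, 2 * np.pi, 4)).astype(np.float32)
pcm = (audio * 32767).astype(np.int16)
raw = pcm.tobytes()  # 4 samples * 2 bytes per int16 sample = 8 bytes
```

Sending bytes avoids a round-trip through a temporary file for each chunk, which keeps streaming latency low.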
\n\n\n```python\nfrom streamer import ParlerTTSStreamer\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer, AutoFeatureExtractor, set_seed\nimport numpy as np\nimport spaces\nimport torch\nfrom threading import Thread\n\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"mps\" if torch.backends.mps.is_available() else \"cpu\"\ntorch_dtype = torch.float16 if device != \"cpu\" else torch.float32\n\nrepo_id = \"parler-tts/parler_tts_mini_v0.1\"\n\njenny_repo_id = \"ylacombe/parler-tts-mini-jenny-30H\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\n jenny_repo_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nf", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "sage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nframe_rate = model.audio_encoder.config.frame_rate\n\n@spaces.GPU\ndef read_response(answer):\n\n play_steps_in_s = 2.0\n play_steps = int(frame_rate * play_steps_in_s)\n\n description = \"Jenny speaks at an average pace with a calm delivery in a very confined sounding environment with clear audio quality.\"\n description_tokens = tokenizer(description, return_tensors=\"pt\").to(device)\n\n streamer = ParlerTTSStreamer(model, device=device, play_steps=play_steps)\n prompt = tokenizer(answer, return_tensors=\"pt\").to(device)\n\n generation_kwargs = dict(\n input_ids=description_tokens.input_ids,\n prompt_input_ids=prompt.input_ids,\n streamer=streamer,\n do_sample=True,\n temperature=1.0,\n min_new_tokens=10,\n )\n\n set_seed(42)\n thread = Thread(target=model.generate, 
kwargs=generation_kwargs)\n thread.start()\n\n for new_audio in streamer:\n print(f\"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds\")\n yield answer, numpy_to_mp3(new_audio, sampling_rate=sampling_rate)\n```\n\n", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "You can see our final application [here](https://huggingface.co/spaces/gradio/magic-8-ball)!\n\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "Start by installing all the dependencies. Add the following lines to a `requirements.txt` file and run `pip install -r requirements.txt`:\n\n```bash\nopencv-python\nfastrtc\nonnxruntime-gpu\n```\n\nWe'll use the ONNX runtime to speed up YOLOv10 inference. This guide assumes you have access to a GPU. If you don't, change `onnxruntime-gpu` to `onnxruntime`. Without a GPU, the model will run slower, resulting in a laggy demo.\n\nWe'll use OpenCV for image manipulation and the [WebRTC](https://webrtc.org/) protocol to achieve near-zero latency.\n\n**Note**: If you want to deploy this app on any cloud provider, you'll need to use your Hugging Face token to connect to a TURN server. Learn more in this [guide](https://fastrtc.org/deployment/). If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faq#faq-what-is-nat).\n\n", "heading1": "Setting up", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "We'll download the YOLOv10 model from the Hugging Face Hub and instantiate a custom inference class to use this model. 
\n\nThe implementation of the inference class isn't covered in this guide, but you can find the source code [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/inference.py#L9) if you're interested. This implementation borrows heavily from this [GitHub repository](https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection).\n\nWe're using the `yolov10-n` variant because it has the lowest latency. See the [Performance](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#performance) section of the README in the YOLOv10 GitHub repository.\n\n```python\nimport cv2\nfrom huggingface_hub import hf_hub_download\nfrom inference import YOLOv10\n\nmodel_file = hf_hub_download(\n repo_id=\"onnx-community/yolov10n\", filename=\"onnx/model.onnx\"\n)\n\nmodel = YOLOv10(model_file)\n\ndef detection(image, conf_threshold=0.3):\n image = cv2.resize(image, (model.input_width, model.input_height))\n new_image = model.detect_objects(image, conf_threshold)\n return new_image\n```\n\nOur inference function, `detection`, accepts a numpy array from the webcam and a desired confidence threshold. Object detection models like YOLO identify many objects and assign a confidence score to each. The lower the confidence, the higher the chance of a false positive. We'll let users adjust the confidence threshold.\n\nThe function returns a numpy array corresponding to the same input image with all detected objects in bounding boxes.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "The Gradio demo is straightforward, but we'll implement a few specific features:\n\n1. Use the `WebRTC` custom component to ensure input and output are sent to/from the server with WebRTC. \n2. The [WebRTC](https://github.com/freddyaboulton/gradio-webrtc) component will serve as both an input and output component.\n3. 
Utilize the `time_limit` parameter of the `stream` event. This parameter sets a processing time for each user's stream. In a multi-user setting, such as on Spaces, we'll stop processing the current user's stream after this period and move on to the next. \n\nWe'll also apply custom CSS to center the webcam and slider on the page.\n\n```python\nimport gradio as gr\nfrom fastrtc import WebRTC\n\n# None works when running locally; on a cloud host, pass your TURN credentials here (see the note above)\nrtc_configuration = None\n\ncss = \"\"\".my-group {max-width: 600px !important; max-height: 600px !important;}\n .my-column {display: flex !important; justify-content: center !important; align-items: center !important;}\"\"\"\n\nwith gr.Blocks(css=css) as demo:\n gr.HTML(\n \"\"\"\n
<h1 style='text-align: center'>\n YOLOv10 Webcam Stream (Powered by WebRTC \u26a1\ufe0f)\n </h1>
\n \"\"\"\n )\n with gr.Column(elem_classes=[\"my-column\"]):\n with gr.Group(elem_classes=[\"my-group\"]):\n image = WebRTC(label=\"Stream\", rtc_configuration=rtc_configuration)\n conf_threshold = gr.Slider(\n label=\"Confidence Threshold\",\n minimum=0.0,\n maximum=1.0,\n step=0.05,\n value=0.30,\n )\n\n image.stream(\n fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10\n )\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "Our app is hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n). \n\nYou can use this app as a starting point to build real-time image applications with Gradio. Don't hesitate to open issues in the space or in the [FastRTC GitHub repo](https://github.com/gradio-app/fastrtc) if you have any questions or encounter problems.", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "First, we'll install the following requirements in our system:\n\n```\nopencv-python\ntorch\ntransformers>=4.43.0\nspaces\n```\n\nThen, we'll download the model from the Hugging Face Hub:\n\n```python\nfrom transformers import RTDetrForObjectDetection, RTDetrImageProcessor\n\nimage_processor = RTDetrImageProcessor.from_pretrained(\"PekingU/rtdetr_r50vd\")\nmodel = RTDetrForObjectDetection.from_pretrained(\"PekingU/rtdetr_r50vd\").to(\"cuda\")\n```\nWe're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers). 
\n\n\n", "heading1": "Setting up the Model", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Our inference function will accept a video and a desired confidence threshold.\nObject detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. So we will let our users set the confidence threshold.\n\nOur function will iterate over the frames in the video and run the RT-DETR model over each frame.\nWe will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video.\nThe function will yield each output video in chunks of two seconds.\n\nIn order to keep inference times as low as possible on ZeroGPU (there is a time-based quota),\nwe will halve the original frames-per-second in the output video and resize the input frames to be half the original \nsize before running the model.\n\nThe code for the inference function is below - we'll go over it piece by piece.\n\n```python\nimport spaces\nimport cv2\nfrom PIL import Image\nimport torch\nimport time\nimport numpy as np\nimport uuid\n\nfrom draw_boxes import draw_bounding_boxes\n\nSUBSAMPLE = 2\n\n@spaces.GPU\ndef stream_object_detection(video, conf_threshold):\n cap = cv2.VideoCapture(video)\n\n # This means we will output mp4 videos\n video_codec = cv2.VideoWriter_fourcc(*\"mp4v\") # type: ignore\n fps = int(cap.get(cv2.CAP_PROP_FPS))\n\n desired_fps = fps // SUBSAMPLE\n width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2\n height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2\n\n iterating, frame = cap.read()\n\n n_frames = 0\n\n # Use UUID to create a unique video file\n output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n\n # Output Video\n output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height)) # type: ignore\n batch = []\n\n while iterating:\n 
 frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n if n_frames % SUBSAMPLE == 0:\n batch.append(frame)\n if len(batc", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": " frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n if n_frames % SUBSAMPLE == 0:\n batch.append(frame)\n if len(batch) == 2 * desired_fps:\n inputs = image_processor(images=batch, return_tensors=\"pt\").to(\"cuda\")\n\n with torch.no_grad():\n outputs = model(**inputs)\n\n boxes = image_processor.post_process_object_detection(\n outputs,\n target_sizes=torch.tensor([(height, width)] * len(batch)),\n threshold=conf_threshold)\n \n for i, (array, box) in enumerate(zip(batch, boxes)):\n pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)\n frame = np.array(pil_image)\n # Convert RGB to BGR\n frame = frame[:, :, ::-1].copy()\n output_video.write(frame)\n\n batch = []\n output_video.release()\n yield output_video_name\n output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height)) # type: ignore\n\n iterating, frame = cap.read()\n n_frames += 1\n```\n\n1. **Reading from the Video**\n\nOne of the industry standards for creating videos in Python is OpenCV so we will use it in this app.\n\nThe `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.\n\nIn order to stream video in Gradio, we need to yield a different video file for each \"chunk\" of the output video.\nWe create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. 
Only \"mp4\" and \"ts\" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame i", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "dth, height))` line. The `video_codec` is how we specify the type of video file. Only \"mp4\" and \"ts\" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the expected `RGB` format of transformers. That's what the first two lines of the while loop are doing. \n\nWe take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two second threshold was chosen to keep the processing time of each batch small enough so that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, the batch size should be at least 1 second. \n\nWe run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.\n\nWe make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). 
We then have to convert from `RGB` to `BGR` before writing back to the output video.\n\nOnce we have finished processing the batch, we create a new output video file for the next batch.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "The UI code is pretty similar to other kinds of Gradio apps. \nWe'll use a standard two-column layout so that users can see the input and output videos side by side.\n\nIn order for streaming to work, we have to set `streaming=True` in the output video. Setting the video\nto autoplay is not necessary but it's a better experience for users.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as app:\n gr.HTML(\n \"\"\"\n
<h1 style='text-align: center'>\n Video Object Detection with RT-DETR\n </h1>
\n \"\"\")\n with gr.Row():\n with gr.Column():\n video = gr.Video(label=\"Video Source\")\n conf_threshold = gr.Slider(\n label=\"Confidence Threshold\",\n minimum=0.0,\n maximum=1.0,\n step=0.05,\n value=0.30,\n )\n with gr.Column():\n output_video = gr.Video(label=\"Processed Video\", streaming=True, autoplay=True)\n\n video.upload(\n fn=stream_object_detection,\n inputs=[video, conf_threshold],\n outputs=[output_video],\n )\n\n\n```\n\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "You can check out our demo hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection). \n\nIt is also embedded on this page below\n\n$demo_rt-detr-object-detection", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).\n\nUsing `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.\n\nThis tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. 
Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak. \n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build demos from 2 ASR libraries:\n\n- Transformers (for this, `pip install torch transformers torchaudio`)\n\nMake sure you have at least one of these installed so that you can follow along the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.\n\nHere's how to build a real time speech recognition (ASR) app:\n\n1. [Set up the Transformers ASR Model](1-set-up-the-transformers-asr-model)\n2. [Create a Full-Context ASR Demo with Transformers](2-create-a-full-context-asr-demo-with-transformers)\n3. [Create a Streaming ASR Demo with Transformers](3-create-a-streaming-asr-demo-with-transformers)\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will start by using a pretrained ASR model from the model, `whisper`.\n\nHere is the code to load `whisper` from Hugging Face `transformers`.\n\n```python\nfrom transformers import pipeline\n\np = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base.en\")\n```\n\nThat's it!\n\n", "heading1": "1. 
Set up the Transformers ASR Model", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.\n\nWe will use `gradio`'s built-in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.\n\n$code_asr\n$demo_asr\n\nThe `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects this in float32 format, so we convert it first to float32, and then extract the transcribed text.\n\n", "heading1": "2. Create a Full-Context ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "To make this a *streaming* demo, we need to make these changes:\n\n1. Set `streaming=True` in the `Audio` component\n2. Set `live=True` in the `Interface`\n3. Add a `state` to the interface to store the recorded audio of a user\n\nTip: You can also set `time_limit` and `stream_every` parameters in the interface. The `time_limit` caps the amount of time each user's stream can take. The default is 30 seconds so users won't be able to stream audio for more than 30 seconds. The `stream_every` parameter controls how frequently data is sent to your function. By default it is 0.5 seconds.\n\nTake a look below.\n\n$code_stream_asr\n\nNotice that we now have a state variable because we need to track all the audio history. 
`transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received. \n\n$demo_stream_asr\n\nNow the ASR model will run inference as you speak! \n", "heading1": "3. Create a Streaming ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "Modern voice applications should feel natural and responsive, moving beyond the traditional \"click-to-record\" pattern. By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.\n\n> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).\n\nIn this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial which includes a demo of the application:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "Many voice apps currently work by the user clicking record, speaking, then stopping the recording. 
While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button. \n\nCreating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning fast response, and Gradio allows for easy creation of impressively functional apps.\n\nThis tutorial shows you how to build a calorie tracking app where you speak to an AI that automatically detects when you start and stop your response, and provides its own text response back to guide you with questions that allow it to give a calorie estimate of your last meal.\n\n", "heading1": "Background", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "- **Gradio**: Provides the web interface and audio handling capabilities\n- **@ricky0123/vad-web**: Handles voice activity detection\n- **Groq**: Powers fast LLM inference for natural conversations\n- **Whisper**: Transcribes speech to text\n\nSetting Up the Environment\n\nFirst, let\u2019s install and import our essential libraries and set up a client for using the Groq API. 
Here\u2019s how to do it:\n\n`requirements.txt`\n```\ngradio\ngroq\nnumpy\nsoundfile\nlibrosa\nspaces\nxxhash\ndatasets\n```\n\n`app.py`\n```python\nimport groq\nimport gradio as gr\nimport soundfile as sf\nfrom dataclasses import dataclass, field\nimport os\n\nInitialize Groq client securely\napi_key = os.environ.get(\"GROQ_API_KEY\")\nif not api_key:\n raise ValueError(\"Please set the GROQ_API_KEY environment variable.\")\nclient = groq.Client(api_key=api_key)\n```\n\nHere, we\u2019re pulling in key libraries to interact with the Groq API, build a sleek UI with Gradio, and handle audio data. We\u2019re accessing the Groq API key securely with a key stored in an environment variable, which is a security best practice for avoiding leaking the API key.\n\n---\n\nState Management for Seamless Conversations\n\nWe need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. To do this, let\u2019s create an `AppState` class:\n\n```python\n@dataclass\nclass AppState:\n conversation: list = field(default_factory=list)\n stopped: bool = False\n model_outs: Any = None\n```\n\nOur `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Each instance will have its own fresh list of conversations, making sure chat history is isolated to each session. \n\n---\n\nTranscribing Audio with Whisper on Groq\n\nNext, we\u2019ll create a function to transcribe the user\u2019s audio input into text using Whisper, a powerful transcription model hosted on Groq. 
This transcription will also help us determine whether there\u2019s meani", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "e\u2019ll create a function to transcribe the user\u2019s audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there\u2019s meaningful speech in the input. Here\u2019s how:\n\n```python\ndef transcribe_audio(client, file_name):\n if file_name is None:\n return None\n\n try:\n with open(file_name, \"rb\") as audio_file:\n response = client.audio.transcriptions.with_raw_response.create(\n model=\"whisper-large-v3-turbo\",\n file=(\"audio.wav\", audio_file),\n response_format=\"verbose_json\",\n )\n completion = process_whisper_response(response.parse())\n return completion\n except Exception as e:\n print(f\"Error in transcription: {e}\")\n return f\"Error in transcription: {str(e)}\"\n```\n\nThis function opens the audio file and sends it to Groq\u2019s Whisper model for transcription, requesting detailed JSON output. verbose_json is needed to get information to determine if speech was included in the audio. We also handle any potential errors so our app doesn\u2019t fully crash if there\u2019s an issue with the API request. 
\n\n```python\ndef process_whisper_response(completion):\n \"\"\"\n Process Whisper transcription response and return text or null based on no_speech_prob\n \n Args:\n completion: Whisper transcription response object\n \n Returns:\n str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None\n \"\"\"\n if completion.segments and len(completion.segments) > 0:\n no_speech_prob = completion.segments[0].get('no_speech_prob', 0)\n print(\"No speech prob:\", no_speech_prob)\n\n if no_speech_prob > 0.7:\n return None\n \n return completion.text.strip()\n \n return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was j", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ext.strip()\n \n return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was just background noise or had actual speaking that was transcribed. It uses a threshold of 0.7 to interpret the no_speech_prob, and will return None if there was no speech. Otherwise, it will return the text transcript of the conversational response from the human.\n\n\n---\n\nAdding Conversational Intelligence with LLM Integration\n\nOur chatbot needs to provide intelligent, friendly responses that flow naturally. We\u2019ll use a Groq-hosted Llama-3.2 for this:\n\n```python\ndef generate_chat_completion(client, history):\n messages = []\n messages.append(\n {\n \"role\": \"system\",\n \"content\": \"In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. 
Be conversational and natural.\",\n }\n )\n\n for message in history:\n messages.append(message)\n\n try:\n completion = client.chat.completions.create(\n model=\"llama-3.2-11b-vision-preview\",\n messages=messages,\n )\n return completion.choices[0].message.content\n except Exception as e:\n return f\"Error in generating chat completion: {str(e)}\"\n```\n\nWe\u2019re defining a system prompt to guide the chatbot\u2019s behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.\n\n---\n\nVoice Activity Detection for Hands-Free Interaction\n\nTo make our chatbot hands-free, we\u2019ll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking. Here\u2019s how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n const script1 = document.createElement(\"script\");\n scrip", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ly detect when someone starts or stops speaking. 
Here\u2019s how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n const script1 = document.createElement(\"script\");\n script1.src = \"https://cdn.jsdelivr.net/npm/onnxruntime-web@1.14.0/dist/ort.js\";\n document.head.appendChild(script1)\n const script2 = document.createElement(\"script\");\n script2.onload = async () => {\n console.log(\"vad loaded\");\n var record = document.querySelector('.record-button');\n record.textContent = \"Just Start Talking!\"\n \n const myvad = await vad.MicVAD.new({\n onSpeechStart: () => {\n var record = document.querySelector('.record-button');\n var player = document.querySelector('streaming-out')\n if (record != null && (player == null || player.paused)) {\n record.click();\n }\n },\n onSpeechEnd: (audio) => {\n var stop = document.querySelector('.stop-button');\n if (stop != null) {\n stop.click();\n }\n }\n })\n myvad.start()\n }\n script2.src = \"https://cdn.jsdelivr.net/npm/@ricky0123/vad-web@0.0.7/dist/bundle.min.js\";\n}\n```\n\nThis script loads our VAD model and sets up functions to start and stop recording automatically. When the user starts speaking, it triggers the recording, and when they stop, it ends the recording.\n\n---\n\nBuilding a User Interface with Gradio\n\nNow, let\u2019s create an intuitive and visually appealing user interface with Gradio. 
This interface will include an audio input for capturing voice, a chat window for displaying responses, and state management to keep things synchronized.\n\n```python\nwith gr.Blocks() as demo:\n with gr.Row():\n input_audio = gr.Audio(\n label=\"Input Audio\",\n sources=[\"microphone\"],\n type=\"numpy\",\n streaming=False,\n waveform_options=gr.WaveformOptions(waveform_color=\"B83A4B\"),\n )\n with gr.Row():\n chatbot = gr.Chatbot(label=\"Conversation\")\n state = g", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "\",\n streaming=False,\n waveform_options=gr.WaveformOptions(waveform_color=\"B83A4B\"),\n )\n with gr.Row():\n chatbot = gr.Chatbot(label=\"Conversation\")\n state = gr.State(value=AppState())\ndemo.launch(theme=theme, js=js)\n```\n\nIn this code block, we\u2019re using Gradio\u2019s `Blocks` API to create an interface with an audio input, a chat display, and an application state manager. The color customization for the waveform adds a nice visual touch.\n\n---\n\nHandling Recording and Responses\n\nFinally, let\u2019s link the recording and response components to ensure the app reacts smoothly to user inputs and provides responses in real-time.\n\n```python\n stream = input_audio.start_recording(\n process_audio,\n [input_audio, state],\n [input_audio, state],\n )\n respond = input_audio.stop_recording(\n response, [state, input_audio], [state, chatbot]\n )\n```\n\nThese lines set up event listeners for starting and stopping the recording, processing the audio input, and generating responses. By linking these events, we create a cohesive experience where users can simply talk, and the chatbot handles the rest.\n\n---\n\n", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "1. 
When you open the app, the VAD system automatically initializes and starts listening for speech\n2. As soon as you start talking, it triggers the recording automatically\n3. When you stop speaking, the recording ends and:\n - The audio is transcribed using Whisper\n - The transcribed text is sent to the LLM\n - The LLM generates a response about calorie tracking\n - The response is displayed in the chat interface\n4. This creates a natural back-and-forth conversation where you can simply talk about your meals and get instant feedback on nutritional content\n\nThis app demonstrates how to create a natural voice interface that feels responsive and intuitive. By combining Groq's fast inference with automatic speech detection, we've eliminated the need for manual recording controls while maintaining high-quality interactions. The result is a practical calorie tracking assistant that users can simply talk to as naturally as they would to a human nutritionist.\n\nLink to GitHub repository: [Groq Gradio Basics](https://github.com/bklieger-groq/gradio-groq-basics/tree/main/calorie-tracker)", "heading1": "Summary", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/).\n\n\nWe recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:\n\n```bash\npip install --upgrade gradio\n```\n\n\nTip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems are provided here. \n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. 
Let's write your first Gradio app:\n\n\n$code_hello_world_4\n\n\nTip: We shorten the imported name from gradio to gr. This is a widely adopted convention for better readability of code. \n\nNow, run your code. If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal.\n\nThe demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.\n\n$demo_hello_world_4\n\nType your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.\n\nTip: When developing locally, you can run your Gradio app in hot reload mode, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in gradio before the name of the file instead of python. In the example above, you would type: `gradio app.py` in your terminal. You can also enable vibe mode by using the --vibe flag, e.g. gradio --vibe app.py, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the Hot Reloading Guide.\n\n\n**Understanding the `Interface` Class**\n\nYou'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The num", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "turn one or more outputs. 
\n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.\n- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.\n\nThe `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.\n\nThe `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications. \n\nTip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`\"textbox\"`) or an instance of the class (`gr.Textbox()`).\n\nIf your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.\n\nWe'll dive deeper into the `gr.Interface` on our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).\n\n", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "What good is a beautiful demo if you can't share it? 
Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. Let's revisit our example demo, but change the last line as follows:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n \ndemo.launch(share=True)  # Share your demo with just 1 extra parameter \ud83d\ude80\n```\n\nWhen you run this code, a public URL will be generated for your demo in a matter of seconds, something like:\n\n\ud83d\udc49   `https://a23dsf231adb.gradio.live`\n\nNow, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.\n\nTo learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).\n\n\n", "heading1": "Sharing Your Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?\n\nCustom Demos with `gr.Blocks`\n\nGradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction \u2014 still all in Python. \n\nYou can build very custom and complex applications using `gr.Blocks()`. 
For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into the `gr.Blocks` on our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).\n\nChatbots with `gr.ChatInterface`\n\nGradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).\n\nThe Gradio Python & JavaScript Ecosystem\n\nThat's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. 
Here are other related parts of the Gradio ecosystem:\n\n* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": ".app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.\n* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications \u2014 for free!\n\n", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).\n\nOr, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can also build Gradio applications without writing any code. Simply type `gradio sketch` into your terminal to open up an editor that lets you define and modify Gradio components, adjust their layouts, add events, all through a web editor. 
Or [use this hosted version of Gradio Sketch, running on Hugging Face Spaces](https://huggingface.co/spaces/aliabid94/Sketch).", "heading1": "Gradio Sketch", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "Gradio demos can be easily shared publicly by setting `share=True` in the `launch()` method. Like this:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(share=True)  # Share your demo with just 1 extra parameter \ud83d\ude80\n```\n\nThis generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on), you don't have to worry about packaging any dependencies.\n\n![sharing](https://github.com/gradio-app/gradio/blob/main/guides/assets/sharing.svg?raw=true)\n\n\nA share link usually looks something like this: **https://07ff8706ab.gradio.live**. Although the link is served through the Gradio Share Servers, these servers are only a proxy for your local server, and do not store any data sent through your app. Share links expire after 1 week. (It is [also possible to set up your own Share Server](https://github.com/huggingface/frp/) on your own cloud server to overcome this restriction.)\n\nTip: Keep in mind that share links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. Or you can [add authentication to your Gradio app](#authentication) as discussed below.\n\nNote that by default, `share=False`, which means that your server is only running locally. 
(This is the default, except in Google Colab notebooks, where share links are automatically created). As an alternative to using share links, you can use [SSH port-forwarding](https://www.ssh.com/ssh/tunneling/example) to share your local server with specific users.\n\n\n", "heading1": "Sharing Demos", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "If you'd like to have a permanent link to your Gradio demo on the internet, use Hugging Face Spaces. [Hugging Face Spaces](http://huggingface.co/spaces/) provides the infrastructure to permanently host your machine learning model for free!\n\nAfter you have [created a free Hugging Face account](https://huggingface.co/join), you have two methods to deploy your Gradio app to Hugging Face Spaces:\n\n1. From terminal: run `gradio deploy` in your app directory. The CLI will gather some basic metadata, upload all the files in the current directory (respecting any `.gitignore` file that may be present in the root of the directory), and then launch your app on Spaces. To update your Space, you can re-run this command or enable the Github Actions option in the CLI to automatically update the Space on `git push`.\n\n2. From your browser: Drag and drop a folder containing your Gradio model and all related files [here](https://huggingface.co/new-space). See [this guide on how to host on Hugging Face Spaces](https://huggingface.co/blog/gradio-spaces) for more information, or watch the embedded video:\n\n\n\n", "heading1": "Hosting on HF Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can add a button to your Gradio app that creates a unique URL you can use to share your app and all components **as they currently are** with others. 
This is useful for sharing unique and interesting generations from your application, or for saving a snapshot of your app at a particular point in time.\n\nTo add a deep link button to your app, place the `gr.DeepLinkButton` component anywhere in your app.\nFor the URL to be accessible to others, your app must be available at a public URL. So be sure to host your app somewhere like Hugging Face Spaces or use the `share=True` parameter when launching your app.\n\nLet's see an example of how this works. Here's a simple Gradio chat app that uses the `gr.DeepLinkButton` component. After a couple of messages, click the deep link button and paste the copied URL into a new browser tab to see the app as it is at that point in time.\n\n$code_deep_link\n$demo_deep_link\n\n\n", "heading1": "Sharing Deep Links", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Once you have hosted your app on Hugging Face Spaces (or on your own server), you may want to embed the demo on a different website, such as your blog or your portfolio. Embedding an interactive demo allows people to try out the machine learning model that you have built, without needing to download or install anything \u2014 right in their browser! The best part is that you can embed interactive demos even in static websites, such as GitHub pages.\n\nThere are two ways to embed your Gradio demos. You can find quick links to both options directly on the Hugging Face Space page, in the \"Embed this Space\" dropdown option:\n\n![Embed this Space dropdown option](https://github.com/gradio-app/gradio/blob/main/guides/assets/embed_this_space.png?raw=true)\n\nEmbedding with Web Components\n\nWeb components typically offer a better experience to users than IFrames. 
Web components load lazily, meaning that they won't slow down the loading time of your website, and they automatically adjust their height based on the size of the Gradio app.\n\nTo embed with Web Components:\n\n1. Import the gradio JS library into your site by adding the script below (replace {GRADIO_VERSION} in the URL with the library version of Gradio you are using).\n\n```html\n\n```\n\n2. Add\n\n```html\n\n```\n\nelement where you want to place the app. Set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button. For example:\n\n```html\n\n```\n\n\n\nYou can see examples of h", "heading1": "Embedding Hosted Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can see examples of how web components look on the Gradio landing page.\n\nYou can also customize the appearance and behavior of your web component with attributes that you pass into the `<gradio-app>` tag:\n\n- `src`: as we've seen, the `src` attribute links to the URL of the hosted Gradio demo that you would like to embed\n- `space`: an optional shorthand if your Gradio demo is hosted on Hugging Face Spaces. Accepts a `username/space_name` instead of a full URL. Example: `gradio/Echocardiogram-Segmentation`. If this attribute is provided, then `src` does not need to be provided.\n- `control_page_title`: a boolean designating whether the html title of the page should be set to the title of the Gradio app (by default `\"false\"`)\n- `initial_height`: the initial height of the web component while it is loading the Gradio app (by default `\"300px\"`). 
Note that the final height is set based on the size of the Gradio app.\n- `container`: whether to show the border frame and information about where the Space is hosted (by default `\"true\"`)\n- `info`: whether to show just the information about where the Space is hosted underneath the embedded app (by default `\"true\"`)\n- `autoscroll`: whether to autoscroll to the output when prediction has finished (by default `\"false\"`)\n- `eager`: whether to load the Gradio app as soon as the page loads (by default `\"false\"`)\n- `theme_mode`: whether to use the `dark`, `light`, or default `system` theme mode (by default `\"system\"`)\n- `render`: an event that is triggered once the embedded space has finished rendering.\n\nHere's an example of how to use these attributes to create a Gradio app that does not lazy load and has an initial height of 0px.\n\n```html\n\n```\n\nHere's another example of how to use the `render` event. An event listener is used to capture the `render` event and will call the `handleLoadComplete()` function once rendering is complete.\n\n```html\n\n```\n\n_Note: While Gradio's CSS will never impact the embedding page, the embedding page can affect the style of the embedded Gradio app. Make sure that any CSS in the parent page isn't so general that it could also apply to the embedded Gradio app and cause the styling to break. Element selectors such as `header { ... }` and `footer { ... }` will be the most likely to cause issues._\n\nEmbedding with IFrames\n\nTo embed with IFrames instead (if you cannot add JavaScript to your website, for example), add this element:\n\n```html\n\n```\n\nAgain, set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button.\n\nNote: if you use IFrames, you'll probably want to add a fixed `height` attribute and set `style=\"border:0;\"` to remove the border. 
In addition, if your app requires permissions such as access to the webcam or the microphone, you'll need to provide that as well using the `allow` attribute.\n\n", "heading1": "Embedding Hosted Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a \"Use via API\" link.\n\n![Use via API](https://github.com/gradio-app/gradio/blob/main/guides/assets/use_via_api.png?raw=true)\n\nThis is a page that lists the endpoints that can be used to query the Gradio app, via our supported clients: either [the Python client](https://gradio.app/guides/getting-started-with-the-python-client/), or [the JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). For each endpoint, Gradio automatically generates the parameters and their types, as well as example inputs, like this.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe endpoints are automatically created when you launch a Gradio application. If you are using Gradio `Blocks`, you can also name each event listener, such as\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\nThis will add and document the endpoint `/addition/` to the automatically generated API page. Read more about the [API page here](./view-api-page).\n\n", "heading1": "API Page", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "When a user makes a prediction to your app, you may need the underlying network request, in order to get the request headers (e.g. for advanced authentication), log the client's IP address, get the query parameters, or for other reasons. 
Gradio supports this in a similar manner to FastAPI: simply add a function parameter whose type hint is `gr.Request` and Gradio will pass in the network request as that parameter. Here is an example:\n\n```python\nimport gradio as gr\n\ndef echo(text, request: gr.Request):\n if request:\n print(\"Request headers dictionary:\", request.headers)\n print(\"IP address:\", request.client.host)\n print(\"Query parameters:\", dict(request.query_params))\n return text\n\nio = gr.Interface(echo, \"textbox\", \"textbox\").launch()\n```\n\nNote: if your function is called directly instead of through the UI (this happens, for\nexample, when examples are cached, or when the Gradio app is called via API), then `request` will be `None`.\nYou should handle this case explicitly to ensure that your app does not throw any errors. That is why\nwe have the explicit check `if request`.\n\n", "heading1": "Accessing the Network Request Directly", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "In some cases, you might have an existing FastAPI app, and you'd like to add a path for a Gradio demo.\nYou can easily do this with `gradio.mount_gradio_app()`.\n\nHere's a complete example:\n\n$code_custom_path\n\nNote that this approach also allows you run your Gradio apps on custom paths (`http://localhost:8000/gradio` in the example above).\n\n\n", "heading1": "Mounting Within Another FastAPI App", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Password-protected app\n\nYou may wish to put an authentication page in front of your app to limit who can open your app. 
With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples. Here's an example that provides password-based authentication for a single user named \"admin\":\n\n```python\ndemo.launch(auth=(\"admin\", \"pass1234\"))\n```\n\nFor more complex authentication handling, you can even pass a function that takes a username and password as arguments, and returns `True` to allow access, `False` otherwise.\n\nHere's an example of a function that accepts any login where the username and password are the same:\n\n```python\ndef same_auth(username, password):\n return username == password\ndemo.launch(auth=same_auth)\n```\n\nIf you have multiple users, you may wish to customize the content that is shown depending on the user that is logged in. You can retrieve the logged in user by [accessing the network request directly](#accessing-the-network-request-directly) as discussed above, and then reading the `.username` attribute of the request. Here's an example:\n\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n m = gr.Markdown()\n demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Abubakar\", \"Abubakar\"), (\"Ali\", \"Ali\")])\n```\n\nNote: For authentication to work properly, third party cookies must be enabled in your browser. This is not the case by default for Safari or for Chrome Incognito Mode.\n\nIf users visit the `/logout` page of your Gradio app, they will automatically be logged out and session cookies deleted. This allows you to add logout functionality to your Gradio app as well. 
Let's update the previous example to include a log out button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as ", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": " Let's update the previous example to include a log out button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n m = gr.Markdown()\n logout_button = gr.Button(\"Logout\", link=\"/logout\")\n demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Pete\", \"Pete\"), (\"Dawood\", \"Dawood\")])\n```\nBy default, visiting `/logout` logs the user out from **all sessions** (e.g. if they are logged in from multiple browsers or devices, all will be signed out). If you want to log out only from the **current session**, add the query parameter `all_session=false` (i.e. `/logout?all_session=false`).\n\nNote: Gradio's built-in authentication provides a straightforward and basic layer of access control but does not offer robust security features for applications that require stringent access controls (e.g. multi-factor authentication, rate limiting, or automatic lockout policies).\n\nOAuth (Login via Hugging Face)\n\nGradio natively supports OAuth login via Hugging Face. In other words, you can easily add a _\"Sign in with Hugging Face\"_ button to your demo, which allows you to get a user's HF username as well as other information from their HF profile. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo) for a live demo.\n\nTo enable OAuth, you must set `hf_oauth: true` as a Space metadata in your README.md file. This will register your Space\nas an OAuth application on Hugging Face. Next, you can use `gr.LoginButton` to add a login button to\nyour Gradio app. 
Once a user is logged in with their HF account, you can retrieve their profile by adding a parameter of type\n`gr.OAuthProfile` to any Gradio function. The user profile will be automatically injected as a parameter value. If you want\nto perform actions on behalf of the user (e.g. list user's private repos, create repo, etc.), you can retrieve the user\ntoken by adding a parameter of type `gr.OAuthToken`. You must def", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "e. If you want\nto perform actions on behalf of the user (e.g. list user's private repos, create repo, etc.), you can retrieve the user\ntoken by adding a parameter of type `gr.OAuthToken`. You must define which scopes you will use in your Space metadata\n(see [documentation](https://huggingface.co/docs/hub/spaces-oauthscopes) for more details).\n\nHere is a short example:\n\n$code_login_with_huggingface\n\nWhen the user clicks on the login button, they get redirected in a new page to authorize your Space.\n\n
\n\nUsers can revoke access to their profile at any time in their [settings](https://huggingface.co/settings/connected-applications).\n\nAs seen above, OAuth features are available only when your app runs in a Space. However, you often need to test your app\nlocally before deploying it. To test OAuth features locally, your machine must be logged in to Hugging Face. Please run `huggingface-cli login` or set `HF_TOKEN` as an environment variable to one of your access tokens. You can generate a new token in your settings page (https://huggingface.co/settings/tokens). Then, clicking on the `gr.LoginButton` will log in to your local Hugging Face profile, allowing you to debug your app with your Hugging Face account before deploying it to a Space.\n\n**Security Note**: It is important to note that adding a `gr.LoginButton` does not restrict users from using your app, in the same way that adding [username-password authentication](/guides/sharing-your-app#password-protected-app) does. This means that users of your app who have not logged in with Hugging Face can still access and run events in your Gradio app -- the difference is that the `gr.OAuthProfile` or `gr.OAuthToken` will be `None` in the corresponding functions.\n\n\nOAuth (with external providers)\n\nIt is also possible to authenticate with external OAuth pr", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "erence is that the `gr.OAuthProfile` or `gr.OAuthToken` will be `None` in the corresponding functions.\n\n\nOAuth (with external providers)\n\nIt is also possible to authenticate with external OAuth providers (e.g. Google OAuth) in your Gradio apps. To do this, first mount your Gradio app within a FastAPI app ([as discussed above](#mounting-within-another-fast-api-app)). Then, you must write an *authentication function*, which gets the user's username from the OAuth provider and returns it. 
This function should be passed to the `auth_dependency` parameter in `gr.mount_gradio_app`.\n\nSimilar to [FastAPI dependency functions](https://fastapi.tiangolo.com/tutorial/dependencies/), the function specified by `auth_dependency` will run before any Gradio-related route in your FastAPI app. The function should accept a single parameter (the FastAPI `Request`) and return either a string (representing a user's username) or `None`. If a string is returned, the user will be able to access the Gradio-related routes in your FastAPI app.\n\nFirst, let's show a simplistic example to illustrate the `auth_dependency` parameter:\n\n```python\nfrom fastapi import FastAPI, Request\nimport gradio as gr\nimport uvicorn\n\napp = FastAPI()\n\ndef get_user(request: Request):\n return request.headers.get(\"user\")\n\ndemo = gr.Interface(lambda s: f\"Hello {s}!\", \"textbox\", \"textbox\")\n\napp = gr.mount_gradio_app(app, demo, path=\"/demo\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n uvicorn.run(app)\n```\n\nIn this example, only requests that include a \"user\" header will be allowed to access the Gradio app. 
Of course, this does not add much security, since any user can add this header in their request.\n\nHere's a more complete example showing how to add Google OAuth to a Gradio app (assuming you've already created OAuth Credentials on the [Google Developer Console](https://console.cloud.google.com/project)):\n\n```python\nimport os\nfrom authlib.integrations.starlette_client import OAuth, OAuthError\nfrom fastapi import FastA", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "entials on the [Google Developer Console](https://console.cloud.google.com/project)):\n\n```python\nimport os\nfrom authlib.integrations.starlette_client import OAuth, OAuthError\nfrom fastapi import FastAPI, Depends, Request\nfrom starlette.config import Config\nfrom starlette.responses import RedirectResponse\nfrom starlette.middleware.sessions import SessionMiddleware\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\n# Replace these with your own OAuth settings\nGOOGLE_CLIENT_ID = \"...\"\nGOOGLE_CLIENT_SECRET = \"...\"\nSECRET_KEY = \"...\"\n\nconfig_data = {'GOOGLE_CLIENT_ID': GOOGLE_CLIENT_ID, 'GOOGLE_CLIENT_SECRET': GOOGLE_CLIENT_SECRET}\nstarlette_config = Config(environ=config_data)\noauth = OAuth(starlette_config)\noauth.register(\n name='google',\n server_metadata_url='https://accounts.google.com/.well-known/openid-configuration',\n client_kwargs={'scope': 'openid email profile'},\n)\n\nSECRET_KEY = os.environ.get('SECRET_KEY') or \"a_very_secret_key\"\napp.add_middleware(SessionMiddleware, secret_key=SECRET_KEY)\n\n# Dependency to get the current user\ndef get_user(request: Request):\n user = request.session.get('user')\n if user:\n return user['name']\n return None\n\n@app.get('/')\ndef public(user: dict = Depends(get_user)):\n if user:\n return RedirectResponse(url='/gradio')\n else:\n return 
RedirectResponse(url='/login-demo')\n\n@app.route('/logout')\nasync def logout(request: Request):\n request.session.pop('user', None)\n return RedirectResponse(url='/')\n\n@app.route('/login')\nasync def login(request: Request):\n redirect_uri = request.url_for('auth')\n # If your app is running on https, you should ensure that the\n # `redirect_uri` is https, e.g. uncomment the following lines:\n \n # from urllib.parse import urlparse, urlunparse\n # redirect_uri = urlunparse(urlparse(str(redirect_uri))._replace(scheme='https'))\n return await oauth.google.authorize_redirect(request, redirect_uri)\n\n@app.route('/auth')\nasync def auth(request: Reque", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "direct_uri = urlunparse(urlparse(str(redirect_uri))._replace(scheme='https'))\n return await oauth.google.authorize_redirect(request, redirect_uri)\n\n@app.route('/auth')\nasync def auth(request: Request):\n try:\n access_token = await oauth.google.authorize_access_token(request)\n except OAuthError:\n return RedirectResponse(url='/')\n request.session['user'] = dict(access_token)[\"userinfo\"]\n return RedirectResponse(url='/')\n\nwith gr.Blocks() as login_demo:\n gr.Button(\"Login\", link=\"/login\")\n\napp = gr.mount_gradio_app(app, login_demo, path=\"/login-demo\")\n\ndef greet(request: gr.Request):\n return f\"Welcome to Gradio, {request.username}\"\n\nwith gr.Blocks() as main_demo:\n m = gr.Markdown(\"Welcome to Gradio!\")\n gr.Button(\"Logout\", link=\"/logout\")\n main_demo.load(greet, None, m)\n\napp = gr.mount_gradio_app(app, main_demo, path=\"/gradio\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n uvicorn.run(app)\n```\n\nThere are actually two separate Gradio apps in this example! 
One simply displays a login button (this demo is accessible to any user), while the other, the main demo, is only accessible to users who are logged in. You can try this example out on [this Space](https://huggingface.co/spaces/gradio/oauth-example).\n\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Gradio apps can function as MCP (Model Context Protocol) servers, allowing LLMs to use your app's functions as tools. By simply setting `mcp_server=True` in the `.launch()` method, Gradio automatically converts your app's functions into MCP tools that can be called by MCP clients like Claude Desktop, Cursor, or Cline. The server exposes tools based on your function names, docstrings, and type hints, and can handle file uploads, authentication headers, and progress updates. You can also create MCP-only functions using `gr.api` and expose resources and prompts using decorators. For a comprehensive guide on building MCP servers with Gradio, see [Building an MCP Server with Gradio](https://www.gradio.app/guides/building-mcp-server-with-gradio).\n\n", "heading1": "MCP Servers", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "When publishing your app publicly and making it available via API or MCP server, you might want to set rate limits to prevent users from abusing your app. You can identify users using their IP address (using the `gr.Request` object [as discussed above](accessing-the-network-request-directly)) or, if they are logged in via Hugging Face OAuth, using their username. 
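As a rough illustration of the idea (a hypothetical, framework-agnostic sketch, not the linked demo), a minimal in-memory sliding-window limiter keyed by IP address or username could look like this:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `max_calls` per `window` seconds for each key."""

    def __init__(self, max_calls: int, window: float):
        self.max_calls = max_calls
        self.window = window
        self.calls = defaultdict(deque)  # key -> timestamps of recent calls

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        q = self.calls[key]
        # Drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_calls:
            return False
        q.append(now)
        return True

limiter = RateLimiter(max_calls=3, window=60.0)
```

Inside your event handler, the key could come from `request.client.host` or `request.username` on the `gr.Request` object, and you could raise `gr.Error` when `limiter.allow(key)` returns `False`. Note that an in-process limiter like this resets on restart and is not shared across multiple workers.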
To see a complete example of how to set rate limits, please see [this Gradio app](https://github.com/gradio-app/gradio/blob/main/demo/rate_limit/run.py).\n\n", "heading1": "Rate Limits", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "By default, Gradio collects certain analytics to help us better understand the usage of the `gradio` library. This includes the following information:\n\n* What environment the Gradio app is running on (e.g. Colab Notebook, Hugging Face Spaces)\n* What input/output components are being used in the Gradio app\n* Whether the Gradio app is utilizing certain advanced features, such as `auth` or `show_error`\n* The IP address which is used solely to measure the number of unique developers using Gradio\n* The version of Gradio that is running\n\nNo information is collected from _users_ of your Gradio app. If you'd like to disable analytics altogether, you can do so by setting the `analytics_enabled` parameter to `False` in `gr.Blocks`, `gr.Interface`, or `gr.ChatInterface`. Or, you can set the GRADIO_ANALYTICS_ENABLED environment variable to `\"False\"` to apply this to all Gradio apps created across your system.\n\n*Note*: this reflects the analytics policy as of `gradio>=4.32.0`.\n\n", "heading1": "Analytics", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "[Progressive Web Apps (PWAs)](https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps) are web applications that are regular web pages or websites, but can appear to the user like installable platform-specific applications.\n\nGradio apps can be easily served as PWAs by setting the `pwa=True` parameter in the `launch()` method. 
Here's an example:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(pwa=True)  # Launch your app as a PWA\n```\n\nThis will generate a PWA that can be installed on your device. Here's how it looks:\n\n![Installing PWA](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/install-pwa.gif)\n\nWhen you specify `favicon_path` in the `launch()` method, the icon will be used as the app's icon. Here's an example:\n\n```python\ndemo.launch(pwa=True, favicon_path=\"./hf-logo.svg\")  # Use a custom icon for your PWA\n```\n\n![Custom PWA Icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/pwa-favicon.png)\n", "heading1": "Progressive Web App (PWA)", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Let's create a demo where a user can choose a filter to apply to their webcam stream. Users can choose from an edge-detection filter, a cartoon filter, or simply flipping the stream vertically.\n\n$code_streaming_filter\n$demo_streaming_filter\n\nYou will notice that if you change the filter value it will immediately take effect in the output stream. That is an important difference between stream events and other Gradio events: the input values of a stream can be changed while the stream is being processed.\n\nTip: We set the \"streaming\" parameter of the image output component to be \"True\". 
Doing so lets the server automatically convert our output images into base64 format, a format that is efficient for streaming.\n\n", "heading1": "A Realistic Image Demo", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For some image streaming demos, like the one above, we don't need to display separate input and output components. Our app would look cleaner if we could just display the modified output stream.\n\nWe can do so by just specifying the input image component as the output of the stream event.\n\n$code_streaming_filter_unified\n$demo_streaming_filter_unified\n\n", "heading1": "Unified Image Demos", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Your streaming function should be stateless. It should take the current input and return its corresponding output. However, there are cases where you may want to keep track of past inputs or outputs. For example, you may want to keep a buffer of the previous `k` inputs to improve the accuracy of your transcription demo. 
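In plain Python terms, such a bounded buffer is easy to express (a hypothetical helper, shown only to illustrate the pattern):

```python
# Hypothetical helper illustrating the bounded-buffer pattern: append the
# newest input and keep only the last k entries.
def update_buffer(buffer: list, item, k: int = 3) -> list:
    return (buffer + [item])[-k:]

buf = []
for chunk in ["a", "b", "c", "d"]:
    buf = update_buffer(buf, chunk)
# buf now holds only the 3 most recent chunks: ["b", "c", "d"]
```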
You can do this with Gradio's `gr.State()` component.\n\nLet's showcase this with a sample demo:\n\n```python\ndef transcribe_handler(current_audio, state, transcript):\n next_text = transcribe(current_audio, history=state)\n state.append(current_audio)\n state = state[-3:]\n return state, transcript + next_text\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n mic = gr.Audio(sources=\"microphone\")\n state = gr.State(value=[])\n with gr.Column():\n transcript = gr.Textbox(label=\"Transcript\")\n mic.stream(transcribe_handler, [mic, state, transcript], [state, transcript],\n time_limit=10, stream_every=1)\n\n\ndemo.launch()\n```\n\n", "heading1": "Keeping track of past inputs or outputs", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For an end-to-end example of streaming from the webcam, see the object detection from webcam [guide](/main/guides/object-detection-from-webcam-with-webrtc).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "When a user closes their browser tab, Gradio will automatically delete any `gr.State` variables associated with that user session after 60 minutes. If the user connects again within those 60 minutes, no state will be deleted.\n\nYou can control the deletion behavior further with the following two parameters of `gr.State`:\n\n1. `delete_callback` - An arbitrary function that will be called when the variable is deleted. This function must take the state value as input. This function is useful for deleting variables from GPU memory.\n2. `time_to_live` - The number of seconds the state should be stored for after it is created or updated. 
This will delete variables before the session is closed, so it's useful for clearing state for potentially long-running sessions.\n\n", "heading1": "Automatic deletion of `gr.State`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Your Gradio application will save uploaded and generated files to a special directory called the cache directory. Gradio uses a hashing scheme to ensure that duplicate files are not saved to the cache, but over time the size of the cache will grow (especially if your app goes viral \ud83d\ude09).\n\nGradio can periodically clean up the cache for you if you specify the `delete_cache` parameter of `gr.Blocks()`, `gr.Interface()`, or `gr.ChatInterface()`. \nThis parameter is a tuple of the form `[frequency, age]`, both expressed in seconds.\nEvery `frequency` seconds, the temporary files created by this Blocks instance will be deleted if more than `age` seconds have passed since the file was created. \nFor example, setting this to (86400, 86400) will delete temporary files every day if they are older than a day.\nAdditionally, the cache will be deleted entirely when the server restarts.\n\n", "heading1": "Automatic cache cleanup via `delete_cache`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Additionally, Gradio now includes a `Blocks.unload()` event, allowing you to run arbitrary cleanup functions when users disconnect (this does not have a 60-minute delay).\nUnlike other Gradio events, this event does not accept inputs or outputs.\nYou can think of the `unload` event as the opposite of the `load` event.\n\n", "heading1": "The `unload` event", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "The following demo uses all of these features. 
When a user visits the page, a special unique directory is created for that user.\nAs the user interacts with the app, images are saved to disk in that special directory.\nWhen the user closes the page, the images created in that session are deleted via the `unload` event.\nThe state and files in the cache are cleaned up automatically as well.\n\n$code_state_cleanup\n$demo_state_cleanup", "heading1": "Putting it all together", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Gradio can stream audio and video directly from your generator function.\nThis lets your user hear your audio or see your video nearly as soon as it's yielded by your function.\nAll you have to do is:\n\n1. Set `streaming=True` in your `gr.Audio` or `gr.Video` output component.\n2. Write a Python generator that yields the next \"chunk\" of audio or video.\n3. Set `autoplay=True` so that the media starts playing automatically.\n\nFor audio, the next \"chunk\" can be either an `.mp3` or `.wav` file or a `bytes` sequence of audio.\nFor video, the next \"chunk\" has to be either an `.mp4` file or a file encoded with the `h.264` codec and a `.ts` extension.\nFor smooth playback, make sure chunks have consistent lengths and are larger than 1 second.\n\nWe'll finish with some simple examples illustrating these points.\n\nStreaming Audio\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(audio_file):\n for _ in range(10):\n sleep(0.5)\n yield audio_file\n\ngr.Interface(keep_repeating,\n gr.Audio(sources=[\"microphone\"], type=\"filepath\"),\n gr.Audio(streaming=True, autoplay=True)\n).launch()\n```\n\nStreaming Video\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(video_file):\n for _ in range(10):\n sleep(0.5)\n yield video_file\n\ngr.Interface(keep_repeating,\n gr.Video(sources=[\"webcam\"], format=\"mp4\"),\n gr.Video(streaming=True, 
autoplay=True)\n).launch()\n```\n\n", "heading1": "Streaming Media", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "For an end-to-end example of streaming media, see the object detection from video [guide](/main/guides/object-detection-from-video) or the streaming AI-generated audio with [transformers](https://huggingface.co/docs/transformers/index) [guide](/main/guides/streaming-ai-generated-audio).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "By default, each event listener has its own queue, which handles one request at a time. This can be configured via two arguments:\n\n- `concurrency_limit`: This sets the maximum number of concurrent executions for an event listener. By default, the limit is 1 unless configured otherwise in `Blocks.queue()`. You can also set it to `None` for no limit (i.e., an unlimited number of concurrent executions). For example:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n prompt = gr.Textbox()\n image = gr.Image()\n generate_btn = gr.Button(\"Generate Image\")\n generate_btn.click(image_gen, prompt, image, concurrency_limit=5)\n```\n\nIn the code above, up to 5 requests can be processed simultaneously for this event listener. Additional requests will be queued until a slot becomes available.\n\nIf you want to manage multiple event listeners using a shared queue, you can use the `concurrency_id` argument:\n\n- `concurrency_id`: This allows event listeners to share a queue by assigning them the same ID. For example, if your setup has only 2 GPUs but multiple functions require GPU access, you can create a shared queue for all those functions. 
Here's how that might look:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n prompt = gr.Textbox()\n image = gr.Image()\n generate_btn_1 = gr.Button(\"Generate Image via model 1\")\n generate_btn_2 = gr.Button(\"Generate Image via model 2\")\n generate_btn_3 = gr.Button(\"Generate Image via model 3\")\n generate_btn_1.click(image_gen_1, prompt, image, concurrency_limit=2, concurrency_id=\"gpu_queue\")\n generate_btn_2.click(image_gen_2, prompt, image, concurrency_id=\"gpu_queue\")\n generate_btn_3.click(image_gen_3, prompt, image, concurrency_id=\"gpu_queue\")\n```\n\nIn this example, all three event listeners share a queue identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, se", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": " identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, set `concurrency_limit=None`. This is useful if your function is calling e.g. an external API which handles the rate limiting of requests itself.\n- The default concurrency limit for all queues can be set globally using the `default_concurrency_limit` parameter in `Blocks.queue()`. 
\n\nThese configurations make it easy to manage the queuing behavior of your Gradio app.\n", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": "You can initialize the `I18n` class with multiple language dictionaries to add custom translations:\n\n```python\nimport gradio as gr\n\n# Create an I18n instance with translations for multiple languages\ni18n = gr.I18n(\n en={\"greeting\": \"Hello, welcome to my app!\", \"submit\": \"Submit\"},\n es={\"greeting\": \"\u00a1Hola, bienvenido a mi aplicaci\u00f3n!\", \"submit\": \"Enviar\"},\n fr={\"greeting\": \"Bonjour, bienvenue dans mon application!\", \"submit\": \"Soumettre\"}\n)\n\nwith gr.Blocks() as demo:\n # Use the i18n method to translate the greeting\n gr.Markdown(i18n(\"greeting\"))\n with gr.Row():\n input_text = gr.Textbox(label=\"Input\")\n output_text = gr.Textbox(label=\"Output\")\n \n submit_btn = gr.Button(i18n(\"submit\"))\n\n# Pass the i18n instance to the launch method\ndemo.launch(i18n=i18n)\n```\n\n", "heading1": "Setting Up Translations", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "When you use the `i18n` instance with a translation key, Gradio will show the corresponding translation to users based on their browser's language settings or the language they've selected in your app.\n\nIf a translation isn't available for the user's locale, the system will fall back to English (if available) or display the key itself.\n\n", "heading1": "How It Works", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "Locale codes should follow the BCP 47 format (e.g., 'en', 'en-US', 'zh-CN'). 
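As a rough illustration (a hypothetical helper, not part of Gradio), the simple locale shapes above can be checked with a small regular expression:

```python
import re

# Covers only the simple BCP 47 shapes shown above: "en" (language),
# "en-US" (language-REGION), and "zh-Hans-CN" (language-Script-REGION).
# The full BCP 47 grammar is considerably more permissive than this.
_LOCALE_RE = re.compile(r"[a-z]{2,3}(-[A-Z][a-z]{3})?(-[A-Z]{2})?")

def looks_like_locale(code: str) -> bool:
    return _LOCALE_RE.fullmatch(code) is not None
```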
The `I18n` class will warn you if you use an invalid locale code.\n\n", "heading1": "Valid Locale Codes", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "The following component properties typically support internationalization:\n\n- `description`\n- `info`\n- `title`\n- `placeholder`\n- `value`\n- `label`\n\nNote that support may vary depending on the component, and some properties might have exceptions where internationalization is not applicable. You can check this by referring to the typehint for the parameter and if it contains `I18nData`, then it supports internationalization.", "heading1": "Supported Component Properties", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "Client side functions are ideal for updating component properties (like visibility, placeholders, interactive state, or styling). 
\n\nHere's a basic example:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n with gr.Row() as row:\n btn = gr.Button(\"Hide this row\")\n \n # This function runs in the browser without a server roundtrip\n btn.click(\n lambda: gr.Row(visible=False), \n None, \n row, \n js=True\n )\n\ndemo.launch()\n```\n\n\n", "heading1": "When to Use Client Side Functions", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Client side functions have some important restrictions:\n* They can only update component properties (not values)\n* They cannot take any inputs\n\nHere are some functions that will work with `js=True`:\n\n```py\n# Simple property updates\nlambda: gr.Textbox(lines=4)\n\n# Multiple component updates\nlambda: [gr.Textbox(lines=4), gr.Button(interactive=False)]\n\n# Using gr.update() for property changes\nlambda: gr.update(visible=True, interactive=False)\n```\n\nWe are working to increase the space of functions that can be transpiled to JavaScript so that they can be run in the browser. [Follow the Groovy library for more info](https://github.com/abidlabs/groovy-transpiler).\n\n\n", "heading1": "Limitations", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Here's a more complete example showing how client side functions can improve the user experience:\n\n$code_todo_list_js\n\n\n", "heading1": "Complete Example", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "When you set `js=True`, Gradio:\n\n1. Transpiles your Python function to JavaScript\n\n2. Runs the function directly in the browser\n\n3. 
Still sends the request to the server (for consistency and to handle any side effects)\n\nThis provides immediate visual feedback while ensuring your application state remains consistent.\n", "heading1": "Behind the Scenes", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "By default, Gradio automatically generates a navigation bar for multipage apps that displays all your pages with \"Home\" as the title for the main page. You can customize the navbar behavior using the `gr.Navbar` component.\n\nPer-Page Navbar Configuration\n\nYou can have different navbar configurations for each page of your app:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n # Navbar for the main page\n navbar = gr.Navbar(\n visible=True,\n main_page_name=\"Dashboard\",\n value=[(\"About\", \"https://example.com/about\")]\n )\n \n gr.Textbox(label=\"Main page content\")\n\nwith demo.route(\"Settings\"):\n # Different navbar for the Settings page\n navbar = gr.Navbar(\n visible=True,\n main_page_name=\"Home\",\n value=[(\"Documentation\", \"https://docs.example.com\")]\n )\n gr.Textbox(label=\"Settings page\")\n\ndemo.launch()\n```\n\n\n**Important Notes:**\n- You can have one `gr.Navbar` component per page. 
Each page's navbar configuration is independent.\n- The `main_page_name` parameter customizes the title of the home page link in the navbar.\n- The `value` parameter allows you to add additional links to the navbar, which can be internal pages or external URLs.\n- If no `gr.Navbar` component is present on a page, the default navbar behavior is used (visible with \"Home\" as the home page title).\n- You can update the navbar properties using standard Gradio event handling, just like with any other component.\n\nHere's an example that demonstrates the last point:\n\n$code_navbar_customization\n\n", "heading1": "Customizing the Navbar", "source_page_url": "https://gradio.app/guides/multipage-apps", "source_page_title": "Additional Features - Multipage Apps Guide"}, {"text": "1. `GRADIO_SERVER_PORT`\n\n- **Description**: Specifies the port on which the Gradio app will run.\n- **Default**: `7860`\n- **Example**:\n ```bash\n export GRADIO_SERVER_PORT=8000\n ```\n\n2. `GRADIO_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio server. To make Gradio accessible from any IP address, set this to `\"0.0.0.0\"`.\n- **Default**: `\"127.0.0.1\"` \n- **Example**:\n ```bash\n export GRADIO_SERVER_NAME=\"0.0.0.0\"\n ```\n\n3. `GRADIO_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio server.\n- **Default**: `100`\n- **Example**:\n ```bash\n export GRADIO_NUM_PORTS=200\n ```\n\n4. `GRADIO_ANALYTICS_ENABLED`\n\n- **Description**: Whether Gradio should collect usage analytics.\n- **Default**: `\"True\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_ANALYTICS_ENABLED=\"True\"\n ```\n\n5. `GRADIO_DEBUG`\n\n- **Description**: Enables or disables debug mode in Gradio. If debug mode is enabled, the main thread does not terminate, allowing error messages to be printed in environments such as Google Colab.\n- **Default**: `0`\n- **Example**:\n ```sh\n export GRADIO_DEBUG=1\n ```\n\n6. 
`GRADIO_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag inputs/outputs in the Gradio interface. See [the Guide on flagging](/guides/using-flagging) for more details.\n- **Default**: `\"manual\"`\n- **Options**: `\"never\"`, `\"manual\"`, `\"auto\"`\n- **Example**:\n ```sh\n export GRADIO_FLAGGING_MODE=\"never\"\n ```\n\n7. `GRADIO_TEMP_DIR`\n\n- **Description**: Specifies the directory where temporary files created by Gradio are stored.\n- **Default**: System default temporary directory\n- **Example**:\n ```sh\n export GRADIO_TEMP_DIR=\"/path/to/temp\"\n ```\n\n8. `GRADIO_ROOT_PATH`\n\n- **Description**: Sets the root path for the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_ROOT_PATH=", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "r the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_ROOT_PATH=\"/myapp\"\n ```\n\n9. `GRADIO_SHARE`\n\n- **Description**: Enables or disables sharing the Gradio app.\n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_SHARE=\"True\"\n ```\n\n10. `GRADIO_ALLOWED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that gradio is allowed to serve. Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app. Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_ALLOWED_PATHS=\"/mnt/sda1,/mnt/sda2\"\n ```\n\n11. 
`GRADIO_BLOCKED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_BLOCKED_PATHS=\"/users/x/gradio_app/admin,/users/x/gradio_app/keys\"\n ```\n\n12. `FORWARDED_ALLOW_IPS`\n\n- **Description**: This is not a Gradio-specific environment variable, but rather one used in server configurations, specifically `uvicorn` which is used by Gradio internally. This environment variable is useful when deploying applications behind a reverse proxy. It defines a list of IP addresses that are trusted to forward traffic to your application. When set, the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [objec", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": " the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [object's](https://www.gradio.app/docs/gradio/request) `client.host` property, it will correctly get the user's IP address instead of the IP address of the reverse proxy server. Note that only trusted IP addresses (i.e. 
the IP addresses of your reverse proxy servers) should be added, as any server with these IP addresses can modify the `X-Forwarded-For` header and spoof the client's IP address.\n- **Default**: `\"127.0.0.1\"`\n- **Example**:\n ```sh\n export FORWARDED_ALLOW_IPS=\"127.0.0.1,192.168.1.100\"\n ```\n\n13. `GRADIO_CACHE_EXAMPLES`\n\n- **Description**: Whether or not to cache examples by default in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()` when no explicit argument is passed for the `cache_examples` parameter. You can set this environment variable to either the string \"true\" or \"false\".\n- **Default**: `\"false\"`\n- **Example**:\n ```sh\n export GRADIO_CACHE_EXAMPLES=\"true\"\n ```\n\n\n14. `GRADIO_CACHE_MODE`\n\n- **Description**: How to cache examples. Only applies if `cache_examples` is set to `True` either via environment variable or by an explicit parameter, AND no explicit argument is passed for the `cache_mode` parameter in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`. Can be set to either the string \"lazy\" or \"eager\". If \"lazy\", examples are cached after their first use for all users of the app. If \"eager\", all examples are cached at app launch.\n\n- **Default**: `\"eager\"`\n- **Example**:\n ```sh\n export GRADIO_CACHE_MODE=\"lazy\"\n ```\n\n\n15. `GRADIO_EXAMPLES_CACHE`\n\n- **Description**: If you set `cache_examples=True` in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. By default, this is in the `.gradio/cached_examples//` subdirectory within your", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "e()`, `gr.ChatInterface()` or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. 
By default, this is in the `.gradio/cached_examples//` subdirectory within your app's working directory. You can customize the location of cached example files created by Gradio by setting the environment variable `GRADIO_EXAMPLES_CACHE` to an absolute path or a path relative to your working directory.\n- **Default**: `\".gradio/cached_examples/\"`\n- **Example**:\n ```sh\n export GRADIO_EXAMPLES_CACHE=\"custom_cached_examples/\"\n ```\n\n\n16. `GRADIO_SSR_MODE`\n\n- **Description**: Controls whether server-side rendering (SSR) is enabled. When enabled, the initial HTML is rendered on the server rather than the client, which can improve initial page load performance and SEO.\n\n- **Default**: `\"False\"` (except on Hugging Face Spaces, where this environment variable sets it to `True`)\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_SSR_MODE=\"True\"\n ```\n\n17. `GRADIO_NODE_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)\n- **Default**: `GRADIO_SERVER_NAME` if it is set, otherwise `\"127.0.0.1\"`\n- **Example**:\n ```sh\n export GRADIO_NODE_SERVER_NAME=\"0.0.0.0\"\n ```\n\n18. `GRADIO_NODE_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)\n- **Default**: `100`\n- **Example**:\n ```sh\n export GRADIO_NODE_NUM_PORTS=200\n ```\n\n19. `GRADIO_RESET_EXAMPLES_CACHE`\n\n- **Description**: If set to \"True\", Gradio will delete and recreate the examples cache directory when the app starts instead of reusing cached examples if they already exist. \n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_RESET_EXAMPLES_CACHE=\"True\"\n ```\n\n20. 
`GRADIO_CHAT_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "e\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_RESET_EXAMPLES_CACHE=\"True\"\n ```\n\n20. `GRADIO_CHAT_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag messages in `gr.ChatInterface` applications. Similar to `GRADIO_FLAGGING_MODE` but specifically for chat interfaces.\n- **Default**: `\"never\"`\n- **Options**: `\"never\"`, `\"manual\"`\n- **Example**:\n ```sh\n export GRADIO_CHAT_FLAGGING_MODE=\"manual\"\n ```\n\n21. `GRADIO_WATCH_DIRS`\n\n- **Description**: Specifies directories to watch for file changes when running Gradio in development mode. When files in these directories change, the Gradio app will automatically reload. Multiple directories can be specified by separating them with commas. This is primarily used by the `gradio` CLI command for development workflows.\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_WATCH_DIRS=\"/path/to/src,/path/to/templates\"\n ```\n\n22. `GRADIO_VIBE_MODE`\n\n- **Description**: Enables the Vibe editor mode, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. When enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use with extreme caution in production environments.\n- **Default**: `\"\"`\n- **Options**: Any non-empty string enables the mode\n- **Example**:\n ```sh\n export GRADIO_VIBE_MODE=\"1\"\n ```\n\n23. `GRADIO_MCP_SERVER`\n\n- **Description**: Enables the MCP (Model Context Protocol) server functionality in Gradio. When enabled, the Gradio app will be set up as an MCP server and documented functions will be added as MCP tools that can be used by LLMs. 
This allows LLMs to interact with your Gradio app's functionality through the MCP protocol.\n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_MCP_SERVER=\"True\"\n ```\n\n\n\n\n\n", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "To set environment variables in your terminal, use the `export` command followed by the variable name and its value. For example:\n\n```sh\nexport GRADIO_SERVER_PORT=8000\n```\n\nIf you're using a `.env` file to manage your environment variables, you can add them like this:\n\n```sh\nGRADIO_SERVER_PORT=8000\nGRADIO_SERVER_NAME=\"localhost\"\n```\n\nThen, use a tool like `dotenv` to load these variables when running your application.\n\n\n\n", "heading1": "How to Set Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "**API endpoint names**\n\nWhen you create a Gradio application, the API endpoint names are automatically generated based on the function names. You can change this by using the `api_name` parameter in `gr.Interface` or `gr.ChatInterface`. If you are using Gradio `Blocks`, you can name each event listener, like this:\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\n**Controlling API endpoint visibility**\n\nWhen building a complex Gradio app, you might want to control how API endpoints appear or behave. 
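The `.env` workflow described above can be sketched with the standard library alone. The `load_env` helper below is a hypothetical stand-in for the third-party `python-dotenv` package (whose `load_dotenv()` you would normally use); it only illustrates the idea:

```python
import os

def load_env(text):
    """Parse KEY=VALUE lines from a .env-style string, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

config = load_env('GRADIO_SERVER_PORT=8000\nGRADIO_SERVER_NAME="localhost"')
os.environ.update(config)  # must run before `import gradio` for Gradio to see them
```

Real projects should prefer `python-dotenv`, which also handles quoting, interpolation, and `export` prefixes.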
Use the `api_visibility` parameter in any `Blocks` event listener to control this:\n\n- `\"public\"` (default): The endpoint is shown in API docs and accessible to all\n- `\"undocumented\"`: The endpoint is hidden from API docs but still accessible to downstream apps\n- `\"private\"`: The endpoint is completely disabled and inaccessible\n\nTo hide an API endpoint from the documentation while still allowing programmatic access:\n\n```python\nbtn.click(add, [num1, num2], output, api_visibility=\"undocumented\")\n```\n\n**Disabling API endpoints**\n\nIf you want to disable an API endpoint altogether so that no one can access it programmatically, set `api_visibility=\"private\"`:\n\n```python\nbtn.click(add, [num1, num2], output, api_visibility=\"private\")\n```\n\nNote: setting `api_visibility=\"private\"` also means that downstream apps will not be able to load your Gradio app using `gr.load()`, as this function uses the Gradio API under the hood.\n\n**Adding API endpoints**\n\nYou can also add new API routes to your Gradio application that do not correspond to events in your UI.\n\nFor example, in this Gradio application, we add a new route that adds numbers and slices a list:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        input = gr.Textbox()\n        button = gr.Button(\"Submit\")\n    output = gr.Textbox()\n\n    def fn(a: int, b: int, c: list[str]) -> tuple[int, str]:\n        return a + b, c[a:b]\n\n    gr.api(fn, api_name=\"add_and_slice\")\n\n_, url, _ = demo.laun", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "ch()\n```\n\nThis will create a new route `/add_and_slice` which will show up in the \"view API\" page. 
It can be programmatically called by the Python or JS Clients (discussed below) like this:\n\n```py\nfrom gradio_client import Client\n\nclient = Client(url)\nresult = client.predict(\n    a=3,\n    b=5,\n    c=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n    api_name=\"/add_and_slice\"\n)\nprint(result)\n```\n\n", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "This API page not only lists all of the endpoints that can be used to query the Gradio app, but also shows the usage of both [the Gradio Python client](https://gradio.app/guides/getting-started-with-the-python-client/), and [the Gradio JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). \n\nFor each endpoint, Gradio automatically generates a complete code snippet with the parameters and their types, as well as example inputs, allowing you to immediately test an endpoint. Here's an example showing an image file input and `str` output:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-snippet.png)\n\n\n", "heading1": "The Clients", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "Instead of reading through the view API page, you can also use Gradio's built-in API recorder to generate the relevant code snippet. 
Simply click on the \"API Recorder\" button, use your Gradio app via the UI as you would normally, and then the API Recorder will generate the code using the Clients to recreate all of your interactions programmatically.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/api-recorder.gif)\n\n", "heading1": "The API Recorder \ud83e\ude84", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "The API page also includes instructions on how to use the Gradio app as a Model Context Protocol (MCP) server, which is a standardized way to expose functions as tools so that they can be used by LLMs. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\nFor the MCP server, each tool, its description, and its parameters are listed, along with instructions on how to integrate with popular MCP Clients. Read more about Gradio's [MCP integration here](https://www.gradio.app/guides/building-mcp-server-with-gradio).\n\n", "heading1": "MCP Server", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "You can access the complete OpenAPI (formerly Swagger) specification of your Gradio app's API at the endpoint `/gradio_api/openapi.json`. 
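Once fetched, the spec at `/gradio_api/openapi.json` is plain JSON and can be explored with the standard library. The tiny spec below is hand-written for illustration (it is not Gradio's actual output, and a real one would be retrieved over HTTP, e.g. with `urllib.request`):

```python
import json

# Hand-written minimal spec for illustration; not Gradio's actual output.
spec = json.loads("""
{
  "openapi": "3.1.0",
  "info": {"title": "My Gradio App", "version": "1.0"},
  "paths": {
    "/call/add_and_slice": {"post": {"summary": "Add two numbers and slice a list"}}
  }
}
""")

# Summarize every route and the HTTP methods it supports
routes = {path: sorted(methods) for path, methods in spec["paths"].items()}
print(routes)  # {'/call/add_and_slice': ['post']}
```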
The OpenAPI specification is a standardized, language-agnostic interface description for REST APIs that enables both humans and computers to discover and understand the capabilities of your service.\n", "heading1": "OpenAPI Specification", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "To add custom buttons to a component, pass a list of `gr.Button()` instances to the `buttons` parameter:\n\n```python\nimport gradio as gr\n\nrefresh_btn = gr.Button(\"Refresh\", variant=\"secondary\", size=\"sm\")\nclear_btn = gr.Button(\"Clear\", variant=\"secondary\", size=\"sm\")\n\ntextbox = gr.Textbox(\n    value=\"Sample text\",\n    label=\"Text Input\",\n    buttons=[refresh_btn, clear_btn]\n)\n```\n\nYou can also mix built-in buttons (as strings) with custom buttons:\n\n```python\ncode = gr.Code(\n    value=\"print('Hello')\",\n    language=\"python\",\n    buttons=[\"copy\", \"download\", refresh_btn, clear_btn]\n)\n```\n\n", "heading1": "Basic Usage", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "Custom buttons work just like regular `gr.Button` components. 
You can connect them to Python functions or JavaScript functions using the `.click()` method:\n\nPython Functions\n\n```python\ndef refresh_data():\n    import random\n    return f\"Refreshed: {random.randint(1000, 9999)}\"\n\nrefresh_btn.click(refresh_data, outputs=textbox)\n```\n\nJavaScript Functions\n\n```python\nclear_btn.click(\n    None,\n    inputs=[],\n    outputs=textbox,\n    js=\"() => ''\"\n)\n```\n\nCombined Python and JavaScript\n\nYou can use the same button for both Python and JavaScript logic:\n\n```python\nalert_btn.click(\n    None,\n    inputs=textbox,\n    outputs=[],\n    js=\"(text) => { alert('Text: ' + text); return []; }\"\n)\n```\n\n", "heading1": "Connecting Button Events", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "Here's a complete example showing custom buttons with both Python and JavaScript functions:\n\n$code_textbox_custom_buttons\n\n\n", "heading1": "Complete Example", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "- Custom buttons appear in the component's toolbar, typically in the top-right corner\n- Only the `value` of the Button is used; other attributes, such as `icon`, are ignored.\n- Buttons are rendered in the order they appear in the `buttons` list\n- Built-in buttons (like \"copy\", \"download\") can be hidden by omitting them from the list\n- Custom buttons work with component events in the same way as regular buttons\n", "heading1": "Notes", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "- **1. Static files**. You can designate static files or directories using the `gr.set_static_paths` function. Static files are not copied to the Gradio cache (see below) and will be served directly from your computer. 
This can help save disk space and reduce the time your app takes to launch, but be mindful of possible security implications, as any static files are accessible to all users of your Gradio app.\n\n- **2. Files in the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list.)\n\n- **3. Files in Gradio's cache**. After you launch your Gradio app, Gradio copies certain files into a temporary cache and makes these files accessible to users. Let's unpack this in more detail below.\n\n\n", "heading1": "Files Gradio allows users to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "First, it's important to understand why Gradio has a cache at all. Gradio copies files to a cache directory before returning them to the frontend. This prevents files from being overwritten by one user while they are still needed by another user of your application. For example, if your prediction function returns a video file, then Gradio will move that video to the cache after your prediction function runs and returns a URL the frontend can use to show the video. Any file in the cache is available via URL to all users of your running application.\n\nTip: You can customize the location of the cache by setting the `GRADIO_TEMP_DIR` environment variable to an absolute path, such as `/home/usr/scripts/project/temp/`. \n\nFiles Gradio moves to the cache\n\nGradio moves three kinds of files into the cache:\n\n1. Files specified by the developer before runtime, e.g. cached examples, default values of components, or files passed into parameters such as the `avatar_images` of `gr.Chatbot`\n\n2. 
File paths returned by a prediction function in your Gradio application, if they ALSO meet one of the conditions below:\n\n* It is in the `allowed_paths` parameter of the `Blocks.launch` method.\n* It is in the current working directory of the Python interpreter.\n* It is in the temp directory obtained by `tempfile.gettempdir()`.\n\n**Note:** files in the current working directory whose name starts with a period (`.`) will not be moved to the cache, even if they are returned from a prediction function, since they often contain sensitive information. \n\nIf none of these criteria are met, the prediction function that is returning that file will raise an exception instead of moving the file to the cache. Gradio performs this check so that arbitrary files on your machine cannot be accessed.\n\n3. Files uploaded by a user to your Gradio app (e.g. through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` parameter.\n\n", "heading1": "The Gradio cache", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "While running, Gradio apps will NOT ALLOW users to access:\n\n- **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. 
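The cache-eligibility rules for condition 2 above can be approximated in a few lines of `pathlib`. This is an illustrative simplification, not Gradio's actual implementation:

```python
import os
import tempfile
from pathlib import Path

def may_be_cached(path, allowed_paths=()):
    """Rough sketch of the checks described above (illustrative only)."""
    p = Path(path).resolve()
    if p.name.startswith("."):  # dotfiles are never moved to the cache
        return False
    bases = [Path(a).resolve() for a in allowed_paths]
    bases += [Path.cwd().resolve(), Path(tempfile.gettempdir()).resolve()]
    return any(p == base or base in p.parents for base in bases)
```

For example, `may_be_cached("output/result.png")` is true because the path resolves under the working directory, while a dotfile like `.env` is rejected.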
This parameter takes precedence over the files that Gradio exposes by default, over the `allowed_paths` parameter, and over the `gr.set_static_paths` function.\n\n- **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.\n\n", "heading1": "The files Gradio will not allow others to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Sharing your Gradio application will also allow users to upload files to your computer or server. You can set a maximum file size for uploads to prevent abuse and to preserve disk space. You can do this with the `max_file_size` parameter of `.launch`. For example, the following code snippet shows two ways to limit file uploads to 5 megabytes per file.\n\n```python\nimport gradio as gr\n\ndemo = gr.Interface(lambda x: x, \"image\", \"image\")\n\ndemo.launch(max_file_size=\"5mb\")\n# or\ndemo.launch(max_file_size=5 * gr.FileSize.MB)\n```\n\n", "heading1": "Uploading Files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "* Set a `max_file_size` for your application.\n* Do not return arbitrary user input from a function that is connected to a file-based output component (`gr.Image`, `gr.File`, etc.). For example, the following interface would allow anyone to move an arbitrary file in your local directory to the cache: `gr.Interface(lambda s: s, \"text\", \"file\")`. This is because the user input is treated as an arbitrary file path. \n* Make `allowed_paths` as small as possible. If a path in `allowed_paths` is a directory, any file within that directory can be accessed. Make sure the entries of `allowed_paths` only contain files related to your application.\n* Run your Gradio application from the same directory the application file is located in. This will narrow the scope of files Gradio will be allowed to move into the cache. 
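To make the `"5mb"` shorthand concrete, here is a hypothetical parser for such size strings. It is not Gradio's code, and it assumes SI multipliers (1 MB = 10^6 bytes); check `gr.FileSize` for the constants Gradio itself exposes:

```python
# Hypothetical sketch of turning "5mb"-style strings into byte counts.
# Assumes SI units; Gradio's internal parsing may differ.
UNITS = {"kb": 10**3, "mb": 10**6, "gb": 10**9, "b": 1}

def parse_file_size(size):
    if isinstance(size, int):
        return size
    s = str(size).strip().lower()
    for suffix in ("kb", "mb", "gb", "b"):  # check longer suffixes first
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * UNITS[suffix])
    return int(float(s))  # bare number of bytes

print(parse_file_size("5mb"))  # 5000000
```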
For example, prefer `python app.py` to `python Users/sources/project/app.py`.\n\n\n", "heading1": "Best Practices", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Both `gr.set_static_paths` and the `allowed_paths` parameter in `launch()` expect absolute paths. Below is a minimal example to display a local `.png` image file in an HTML block.\n\n```txt\n├── assets\n│   └── logo.png\n└── app.py\n```\nFor the example directory structure, `logo.png` and any other files in the `assets` folder can be accessed from your Gradio app in `app.py` as follows:\n\n```python\nfrom pathlib import Path\n\nimport gradio as gr\n\ngr.set_static_paths(paths=[Path.cwd().absolute()/\"assets\"])\n\nwith gr.Blocks() as demo:\n    gr.HTML(\"<img src='/gradio_api/file=assets/logo.png'>\")\n\ndemo.launch()\n```\n", "heading1": "Example: Accessing local files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.\n\nSuch models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo on the bottom of the page.\n\nLet's get started!\n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). 
We will be using a pretrained image classification model, so you should also have `torch` installed.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/image-classification-in-pytorch", "source_page_title": "Other Tutorials - Image Classification In Pytorch Guide"}, {"text": "First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.\n\n```python\nimport torch\n\nmodel = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()\n```\n\nBecause we will be using the model for inference, we have called the `.eval()` method.\n\n", "heading1": "Step 1 \u2014 Setting up the Image Classification Model", "source_page_url": "https://gradio.app/guides/image-classification-in-pytorch", "source_page_title": "Other Tutorials - Image Classification In Pytorch Guide"}, {"text": "Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class names and whose values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).\n\nIn the case of our pretrained model, it will look like this:\n\n```python\nimport requests\nfrom PIL import Image\nfrom torchvision import transforms\n\n# Download human-readable labels for ImageNet.\nresponse = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\ndef predict(inp):\n    inp = transforms.ToTensor()(inp).unsqueeze(0)\n    with torch.no_grad():\n        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)\n        confidences = {labels[i]: float(prediction[i]) for i in range(1000)}\n    return confidences\n```\n\nLet's break this down. 
The function takes one parameter:\n\n- `inp`: the input image as a `PIL` image\n\nThen, the function converts the `PIL` image to a PyTorch `tensor`, passes it through the model, and returns:\n\n- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities\n\n", "heading1": "Step 2 \u2014 Defining a `predict` function", "source_page_url": "https://gradio.app/guides/image-classification-in-pytorch", "source_page_title": "Other Tutorials - Image Classification In Pytorch Guide"}, {"text": "Now that we have our predictive function set up, we can create a Gradio Interface around it.\n\nIn this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type=\"pil\")` which creates the component and handles the preprocessing to convert that to a `PIL` image.\n\nThe output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 labels by constructing it as `Label(num_top_classes=3)`.\n\nFinally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:\n\n```python\nimport gradio as gr\n\ngr.Interface(fn=predict,\n             inputs=gr.Image(type=\"pil\"),\n             outputs=gr.Label(num_top_classes=3),\n             examples=[\"lion.jpg\", \"cheetah.jpg\"]).launch()\n```\n\nThis produces the following interface, which you can try right here in your browser (try uploading your own examples!):\n\n\n\n\n---\n\nAnd you're done! That's all the code you need to build a web demo for an image classifier. 
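The `{label: confidence}` dictionary that `predict` returns (and that `gr.Label` consumes) can be reproduced in miniature without `torch`; the class names and logits below are made up for illustration:

```python
import math

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]  # stabilized softmax
    total = sum(exps)
    return [e / total for e in exps]

labels = ["lion", "cheetah", "tabby"]   # made-up class names
logits = [2.0, 1.0, 0.1]                # made-up model outputs
confidences = dict(zip(labels, softmax(logits)))

print(max(confidences, key=confidences.get))  # lion
```

As in the real `predict` function, the values are probabilities that sum to 1, so `gr.Label` can render them directly as a ranked list.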
If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!\n", "heading1": "Step 3 \u2014 Creating a Gradio Interface", "source_page_url": "https://gradio.app/guides/image-classification-in-pytorch", "source_page_title": "Other Tutorials - Image Classification In Pytorch Guide"}, {"text": "In this Guide, we'll walk you through:\n\n- Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces\n- How to setup a Gradio demo for EfficientNet-Lite4\n- How to contribute your own Gradio demos for the ONNX organization on Hugging Face\n\nHere's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.\n\nThe [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. 
The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.\n\n", "heading1": "What is the ONNX Model Zoo?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Gradio\n\nGradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)\n\nHugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).\n\nHugging Face Models\n\nThe Hugging Face Model Hub also supports ONNX models, and they can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads)\n\n", "heading1": "What are Hugging Face Spaces & Gradio?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try an ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud without downloading anything locally. 
Note, there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime), [MXNet](https://github.com/apache/incubator-mxnet).\n\n", "heading1": "How did Hugging Face help the ONNX Model Zoo?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.\n\nONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).\n\n", "heading1": "What is the role of ONNX Runtime?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more, read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4)\n\nHere we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.\n\nFirst, we import our dependencies and download and load the efficientnet-lite4 model from the ONNX Model Zoo. 
Then we load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a Gradio interface for a user to interact with. See the full code below.\n\n```python\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\nimport cv2\nimport json\nimport gradio as gr\nfrom huggingface_hub import hf_hub_download\nfrom onnx import hub\nimport onnxruntime as ort\n\n# loads ONNX model from ONNX Model Zoo\nmodel = hub.load(\"efficientnet-lite4\")\n# loads the labels text file\nlabels = json.load(open(\"labels_map.txt\", \"r\"))\n\n# sets image file dimensions to 224x224 by resizing and cropping image from center\ndef pre_process_edgetpu(img, dims):\n    output_height, output_width, _ = dims\n    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)\n    img = center_crop(img, output_height, output_width)\n    img = np.asarray(img, dtype='float32')\n    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]\n    img -= [127.0, 127.0, 127.0]\n    img /= [128.0, 128.0, 128.0]\n    return img\n\n# resizes the image with a proportional scale\ndef resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):\n    height, width, _ = img.shape\n    new_height = int(100. * out_he", "heading1": "Setting up a Gradio Demo for EfficientNet-Lite4", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "# resizes the image with a proportional scale\ndef resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):\n    height, width, _ = img.shape\n    new_height = int(100. * out_height / scale)\n    new_width = int(100. 
* out_width / scale)\n    if height > width:\n        w = new_width\n        h = int(new_height * height / width)\n    else:\n        h = new_height\n        w = int(new_width * width / height)\n    img = cv2.resize(img, (w, h), interpolation=inter_pol)\n    return img\n\n# crops the image around the center based on given height and width\ndef center_crop(img, out_height, out_width):\n    height, width, _ = img.shape\n    left = int((width - out_width) / 2)\n    right = int((width + out_width) / 2)\n    top = int((height - out_height) / 2)\n    bottom = int((height + out_height) / 2)\n    img = img[top:bottom, left:right]\n    return img\n\n\nsess = ort.InferenceSession(model)\n\ndef inference(img):\n    img = cv2.imread(img)\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n    img = pre_process_edgetpu(img, (224, 224, 3))\n\n    img_batch = np.expand_dims(img, axis=0)\n\n    results = sess.run([\"Softmax:0\"], {\"images:0\": img_batch})[0]\n    result = reversed(results[0].argsort()[-5:])\n    resultdic = {}\n    for r in result:\n        resultdic[labels[str(r)]] = float(results[0][r])\n    return resultdic\n\ntitle = \"EfficientNet-Lite4\"\ndescription = \"EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite model. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 
30ms/image) on a Pixel 4 CPU.\"\nexamples = [['catonnx.jpg']]\ngr.Interface(inference, gr.Image(type=\"filepath\"), \"label\", title=title, description=description, examples=examples).launch()\n```\n\n", "heading1": "Setting up a Gradio Demo for EfficientNet-Lite4", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "- Add the model to the [onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)\n- Create an account on Hugging Face [here](https://huggingface.co/join).\n- For the list of models left to add to the ONNX organization, please refer to the table in the [Models list](https://github.com/onnx/models#models)\n- Add a Gradio Demo under your username; see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up a Gradio Demo on Hugging Face.\n- Request to join the ONNX Organization [here](https://huggingface.co/onnx).\n- Once approved, transfer the model from your username to the ONNX organization\n- Add a badge for the model in the model table; see examples in the [Models list](https://github.com/onnx/models#models)\n", "heading1": "How to contribute Gradio demos on HF spaces using ONNX models", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `launch()` method of `Blocks` or `Interface`. For example:\n\n```python\nwith gr.Blocks() as demo:\n    ...  # 
your code here\ndemo.launch(theme=gr.themes.Soft())\n```\n\n
\n\n
\n\nGradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:\n\n\n* `gr.themes.Base()` - the `\"base\"` theme sets the primary color to blue but otherwise has minimal styling, making it particularly useful as a base for creating new, custom themes.\n* `gr.themes.Default()` - the `\"default\"` Gradio 5 theme, with a vibrant orange primary color and gray secondary color.\n* `gr.themes.Origin()` - the `\"origin\"` theme is most similar to Gradio 4 styling. Colors, especially in light mode, are more subdued than the Gradio 5 default theme.\n* `gr.themes.Citrus()` - the `\"citrus\"` theme uses a yellow primary color, highlights form elements that are in focus, and includes fun 3D effects when buttons are clicked.\n* `gr.themes.Monochrome()` - the `\"monochrome\"` theme uses a black primary and white secondary color, and uses serif-style fonts, giving the appearance of a black-and-white newspaper. \n* `gr.themes.Soft()` - the `\"soft\"` theme uses a purple primary color and white secondary color. It also increases the border radius around buttons and form elements and highlights labels.\n* `gr.themes.Glass()` - the `\"glass\"` theme has a blue primary color and a translucent gray secondary color. The theme also uses vertical gradients to create a glassy effect.\n* `gr.themes.Ocean()` - the `\"ocean\"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.\n\n\nEach of these themes sets values", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": " for hundreds of CSS variables. 
You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:\n\n```python\nimport gradio as gr\n\ngr.themes.builder()\n```\n\n$demo_theme_builder\n\nYou can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.\n\nAs you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.\n\nIn the rest of the guide, we will cover building themes programmatically.\n\n", "heading1": "Using the Theme Builder", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Although each theme has hundreds of CSS variables, the values for most these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app.\n\nCore Colors\n\nThe first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. Internally, these Color objects hold brightness values for the palette of a single hue, ranging from 50, 100, 200..., 800, 900, 950. Other CSS variables are derived from these 3 colors.\n\nThe 3 color constructor arguments are:\n\n- `primary_hue`: This is the color draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.\n- `secondary_hue`: This is the color that is used for secondary elements in your theme. 
In the default theme, this is set to `gradio.themes.colors.blue`.\n- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.\n\nYou could modify these values using their string shortcuts, such as\n\n```python\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=gr.themes.Default(primary_hue="red", secondary_hue="pink"))\n```\n\nor you could use the `Color` objects directly, like this:\n\n```python\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink))\n```\n\n
\n\n
\n\nPredefined colors are:\n\n- `slate`\n- `gray`\n- `zinc`\n- `neutral`\n- `stone`\n- `red`\n- `orange`\n- `amber`\n- `yellow`\n- `lime`\n- `green`\n- `emerald`\n- `teal`\n- `cyan`\n- `sky`\n- `blue`\n- `indigo`\n- `violet`\n- `purple`\n- `fuchsia`\n- `pink`\n- `rose`\n\nYou could also create your own custom `Color` objects and pass them in.\n\nCore Sizing\n\nThe nex", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "ld`\n- `teal`\n- `cyan`\n- `sky`\n- `blue`\n- `indigo`\n- `violet`\n- `purple`\n- `fuchsia`\n- `pink`\n- `rose`\n\nYou could also create your own custom `Color` objects and pass them in.\n\nCore Sizing\n\nThe next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.\n\n- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.\n- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.\n- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.\n\nYou could modify these values using their string shortcuts, such as\n\n```python\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=gr.themes.Default(spacing_size=\"sm\", radius_size=\"none\"))\n ...\n```\n\nor you could use the `Size` objects directly, like this:\n\n```python\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none))\n ...\n```\n\n
\n\n
\n\nThe predefined size objects are:\n\n- `radius_none`\n- `radius_sm`\n- `radius_md`\n- `radius_lg`\n- `spacing_sm`\n- `spacing_md`\n- `spacing_lg`\n- `text_sm`\n- `text_md`\n- `text_lg`\n\nYou could also create your own custom `Size` objects and pass them in.\n\nCore Fonts\n\nThe final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.\n\n- `font`: Th", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.\n\n- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Sans\")`.\n- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Mono\")`.\n\nYou could modify these values such as the following:\n\n```python\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=gr.themes.Default(font=[gr.themes.GoogleFont(\"Inconsolata\"), \"Arial\", \"sans-serif\"]))\n ...\n```\n\n
\n\n
\n\n", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n loader_color=\"FF0000\",\n slider_color=\"FF0000\",\n)\n\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=theme)\n```\n\nIn the example above, we've set the `loader_color` and `slider_color` variables to `FF0000`, despite the overall `primary_color` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.\n\nYour IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.\n\nCSS Variable Naming Conventions\n\nCSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:\n\n1. The target element, such as `button`, `slider`, or `block`.\n2. The target element type or sub-element, such as `button_primary`, or `block_label`.\n3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.\n4. Any relevant state, such as `button_primary_background_fill_hover`.\n5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.\n\nOf course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.\n\nCSS Variable Organization\n\nThough there are hundreds of CSS variables, they do not all have to have individual values. 
They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of indi", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "cing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify.\n\nReferencing Core Variables\n\nTo reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n button_primary_background_fill=\"*primary_200\",\n button_primary_background_fill_hover=\"*primary_300\",\n)\n```\n\nIn the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.\n\nSimilarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. For example:\n\n```python\ntheme = gr.themes.Default(radius_size=\"md\").set(\n button_primary_border_radius=\"*radius_xl\",\n)\n```\n\nIn the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range.\n\nReferencing Other Variables\n\nVariables can also reference each other. 
For example, look at the example below:\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"FF0000\",\n button_primary_background_fill_hover=\"FF0000\",\n button_primary_border=\"FF0000\",\n)\n```\n\nHaving to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"F", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "ll` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"FF0000\",\n button_primary_background_fill_hover=\"*button_primary_background_fill\",\n button_primary_border=\"*button_primary_background_fill\",\n)\n```\n\nNow, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.\n\nThis is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.\n\nNote that dark mode variables automatically reference each other. 
For example:\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill="FF0000",\n button_primary_background_fill_dark="AAAAAA",\n button_primary_border="*button_primary_background_fill",\n button_primary_border_dark="*button_primary_background_fill_dark",\n)\n```\n\n`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode always draws from the dark version of the variable.\n\n
\n\n
\n\nThe Base theme is very barebones, and uses `gr.themes.Blue` as its primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the default core arguments of our app. We'll overwrite the constructor and pass new defaults for the core constructor arguments.\n\nWe'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.\n\n$code_theme_new_step_2\n\n
\n\n
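The constructor override described above might be sketched like this, assuming the prebuilt palettes live under `gradio.themes.colors` and the size presets under `gradio.themes.sizes`:

```python
import gradio as gr
from gradio.themes.base import Base

class Seafoam(Base):
    def __init__(self):
        # Pass new defaults for the core constructor arguments.
        super().__init__(
            primary_hue=gr.themes.colors.emerald,
            secondary_hue=gr.themes.colors.blue,
            neutral_hue=gr.themes.colors.blue,
            text_size=gr.themes.sizes.text_lg,
            font=[gr.themes.GoogleFont("Quicksand"), "ui-sans-serif", "sans-serif"],
        )

seafoam = Seafoam()
```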
\n\nSee how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.\n\nLet's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.\n\n$code_theme_new_step_3\n\n
\n\n
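A sketch of calling `set()` from inside the theme class, with illustrative variable choices; the `*` references resolve against the core constructor arguments:

```python
import gradio as gr
from gradio.themes.base import Base

class Seafoam(Base):
    def __init__(self):
        super().__init__(primary_hue=gr.themes.colors.emerald)
        # Overwrite CSS variables directly, referencing core arguments
        # with the `*` prefix. These particular choices are illustrative.
        super().set(
            button_primary_background_fill="*primary_300",
            button_primary_background_fill_hover="*primary_200",
            slider_color="*secondary_300",
        )

seafoam = Seafoam()
```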
\n\nLook how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code ", "heading1": "Creating a Full Theme", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "ght\"\n\tframeborder=\"0\"\n>\n\n\nLook how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.\n\n", "heading1": "Creating a Full Theme", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!\n\nUploading a Theme\n\nThere are two ways to upload a theme, via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.\n\n- Via the class instance\n\nEach theme instance has a method called `push_to_hub` we can use to upload a theme to the HuggingFace hub.\n\n```python\nseafoam.push_to_hub(repo_name=\"seafoam\",\n version=\"0.0.1\",\n\t\t\t\t\ttoken=\"\")\n```\n\n- Via the command line\n\nFirst save the theme to disk\n\n```python\nseafoam.dump(filename=\"seafoam.json\")\n```\n\nThen use the `upload_theme` command:\n\n```bash\nupload_theme\\\n\"seafoam.json\"\\\n\"seafoam\"\\\n--version \"0.0.1\"\\\n--token \"\"\n```\n\nIn order to upload a theme, you must have a HuggingFace account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-startlogin)\nas the `token` argument. 
However, if you log in via the [HuggingFace command line](https://huggingface.co/docs/huggingface_hub/quick-startlogin) (which comes installed with `gradio`),\nyou can omit the `token` argument.\n\nThe `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.\nThat way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying\nabout changing how previously created apps look. The `version` argument is optional. If omitted, the next patch version is automatically applied.\n\nTheme Previews\n\nBy calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).\n\nFor example, the theme preview for the calm seafoam theme is here: [calm seafoam preview](https://huggingface.co/spaces/shivalikasingh/calm_seafoam).\n\n
\n\n\n
\n\nDiscovering Themes\n\nThe [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public gradio themes. After publishing your theme,\nit will automatically show up in the theme gallery after a couple of minutes.\n\nYou can sort the themes by the number of likes on the space and from most to least recently created as well as toggling themes between light and dark mode.\n\n
\n\n
\n\nDownloading\n\nTo use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:\n\n```python\nmy_theme = gr.Theme.from_hub("gradio/seafoam")\n\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme=my_theme)\n```\n\nYou can also pass the theme string directly to the `launch()` method of `Blocks` or `Interface` (e.g. `demo.launch(theme="gradio/seafoam")`).\n\nYou can pin your app to an upstream theme version by using semantic versioning expressions.\n\nFor example, the following would ensure the theme we load from the `seafoam` repo was between versions `0.0.1` and `0.1.0`:\n\n```python\nwith gr.Blocks() as demo:\n ... your code here\ndemo.launch(theme="gradio/seafoam@>=0.0.1,<0.1.0")\n```\n\nEnjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!\nIf you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!\n\n\n
The SSE protocol has several advantages over simply using HTTP POST requests: \n\n(1) They do not time out -- most browsers raise a timeout error if they do not get a response to a POST request after a short period of time (e.g. 1 min). This can be a problem if your inference function takes longer than 1 minute to run or if many people are trying out your demo at the same time, resulting in increased latency.\n\n(2) They allow the server to send multiple updates to the frontend. This means, for example, that the server can send a real-time ETA of how long your prediction will take to complete.\n\nTo configure the queue, simply call the `.queue()` method before launching an `Interface`, `TabbedInterface`, `ChatInterface` or any `Blocks`. Here's an example:\n\n```py\nimport gradio as gr\n\napp = gr.Interface(lambda x: x, "image", "image")\napp.queue()  # Sets up a queue with default parameters\napp.launch()\n```\n\n**How Requests are Processed from the Queue**\n\nWhen a Gradio server is launched, a pool of threads is used to execute requests from the queue. By default, the maximum size of this thread pool is `40` (which is the default inherited from FastAPI, on which the Gradio server is based). However, this does *not* mean that 40 requests are always processed in parallel from the queue. \n\nInstead, Gradio uses a **single-function-single-worker** model by default. This means that each worker thread is only assigned a single function from among all of the functions that could be part of your Gradio app. This ensures that you do
This ensures that you do not see, for example, out-of-memory errors, due to multiple workers calling a machine learning model at the same time. Suppose you have 3 functions in your Gradio app: A, B, and C. And you see the following sequence of 7 requests come in from users using your app:\n\n```\n1 2 3 4 5 6 7\n-------------\nA B A A C B A\n```\n\nInitially, 3 workers will get dispatched to handle requests 1, 2, and 5 (corresponding to functions: A, B, C). As soon as any of these workers finish, they will start processing the next function in the queue of the same function type, e.g. the worker that finished processing request 1 will start processing request 3, and so on.\n\nIf you want to change this behavior, there are several parameters that can be used to configure the queue and help reduce latency. Let's go through them one-by-one.\n\n\nThe `default_concurrency_limit` parameter in `queue()`\n\nThe first parameter we will explore is the `default_concurrency_limit` parameter in `queue()`. This controls how many workers can execute the same event. By default, this is set to `1`, but you can set it to a higher integer: `2`, `10`, or even `None` (in the last case, there is no limit besides the total number of available workers). \n\nThis is useful, for example, if your Gradio app does not call any resource-intensive functions. If your app only queries external APIs, then you can set the `default_concurrency_limit` much higher. Increasing this parameter can **linearly multiply the capacity of your server to handle requests**.\n\nSo why not set this parameter much higher all the time? Keep in mind that since requests are processed in parallel, each request will consume memory to store the data and weights for processing. 
This means that you might get out-of-memory errors if you increase the `default_concurrenc", "heading1": "Overview of Gradio's Queueing System", "source_page_url": "https://gradio.app/guides/setting-up-a-demo-for-maximum-performance", "source_page_title": "Other Tutorials - Setting Up A Demo For Maximum Performance Guide"}, {"text": "sts are processed in parallel, each request will consume memory to store the data and weights for processing. This means that you might get out-of-memory errors if you increase the `default_concurrency_limit` too high. You may also start to get diminishing returns if the `default_concurrency_limit` is too high because of costs of switching between different worker threads.\n\n**Recommendation**: Increase the `default_concurrency_limit` parameter as high as you can while you continue to see performance gains or until you hit memory limits on your machine. You can [read about Hugging Face Spaces machine specs here](https://huggingface.co/docs/hub/spaces-overview).\n\n\nThe `concurrency_limit` parameter in events\n\nYou can also set the number of requests that can be processed in parallel for each event individually. These take priority over the `default_concurrency_limit` parameter described previously.\n\nTo do this, set the `concurrency_limit` parameter of any event listener, e.g. `btn.click(..., concurrency_limit=20)` or in the `Interface` or `ChatInterface` classes: e.g. `gr.Interface(..., concurrency_limit=20)`. By default, this parameter is set to the global `default_concurrency_limit`.\n\n\nThe `max_threads` parameter in `launch()`\n\nIf your demo uses non-async functions, e.g. `def` instead of `async def`, they will be run in a threadpool. This threadpool has a size of 40 meaning that only 40 threads can be created to run your non-async functions. If you are running into this limit, you can increase the threadpool size with `max_threads`. 
The default value is 40.\n\nTip: You should use async functions whenever possible to increase the number of concurrent requests your app can handle. Quick functions that are not CPU-bound are good candidates to be written as `async`. This [guide](https://fastapi.tiangolo.com/async/) is a good primer on the concept.\n\n\nThe `max_size` parameter in `queue()`\n\nA more blunt way to reduce the wait times is simply to prevent too many pe", "heading1": "Overview of Gradio's Queueing System", "source_page_url": "https://gradio.app/guides/setting-up-a-demo-for-maximum-performance", "source_page_title": "Other Tutorials - Setting Up A Demo For Maximum Performance Guide"}, {"text": "is [guide](https://fastapi.tiangolo.com/async/) is a good primer on the concept.\n\n\nThe `max_size` parameter in `queue()`\n\nA more blunt way to reduce the wait times is simply to prevent too many people from joining the queue in the first place. You can set the maximum number of requests that the queue processes using the `max_size` parameter of `queue()`. If a request arrives when the queue is already of the maximum size, it will not be allowed to join the queue and instead, the user will receive an error saying that the queue is full and to try again. By default, `max_size=None`, meaning that there is no limit to the number of users that can join the queue.\n\nParadoxically, setting a `max_size` can often improve user experience because it prevents users from being dissuaded by very long queue wait times. Users who are more interested and invested in your demo will keep trying to join the queue, and will be able to get their results faster.\n\n**Recommendation**: For a better user experience, set a `max_size` that is reasonable given your expectations of how long users might be willing to wait for a prediction.\n\nThe `max_batch_size` parameter in events\n\nAnother way to increase the parallelism of your Gradio demo is to write your function so that it can accept **batches** of inputs. 
Most deep learning models can process batches of samples more efficiently than processing individual samples.\n\nIf you write your function to process a batch of samples, Gradio will automatically batch incoming requests together and pass them into your function as a batch of samples. You need to set `batch` to `True` (by default it is `False`) and set a `max_batch_size` (by default it is `4`) based on the maximum number of samples your function is able to handle. These two parameters can be passed into `gr.Interface()` or to an event in Blocks such as `.click()`.\n\nWhile setting a batch is conceptually similar to having workers process requests in parallel, it is often _faster_ than set", "heading1": "Overview of Gradio's Queueing System", "source_page_url": "https://gradio.app/guides/setting-up-a-demo-for-maximum-performance", "source_page_title": "Other Tutorials - Setting Up A Demo For Maximum Performance Guide"}, {"text": "e passed into `gr.Interface()` or to an event in Blocks such as `.click()`.\n\nWhile setting a batch is conceptually similar to having workers process requests in parallel, it is often _faster_ than setting the `concurrency_count` for deep learning models. 
The downside is that you might need to adapt your function a little bit to accept batches of samples instead of individual samples.\n\nHere's an example of a function that does _not_ accept a batch of inputs -- it processes a single input at a time:\n\n```py\ndef trim_words(word, length):\n return word[:int(length)]\n\n```\n\nHere's the same function rewritten to take in a batch of samples:\n\n```py\ndef trim_words(words, lengths):\n trimmed_words = []\n for w, l in zip(words, lengths):\n trimmed_words.append(w[:int(l)])\n return [trimmed_words]\n\n```\n\nThe second function can be used with `batch=True` and an appropriate `max_batch_size` parameter.\n\n**Recommendation**: If possible, write your function to accept batches of samples, and then set `batch` to `True` and the `max_batch_size` as high as possible based on your machine's memory limits.\n\n
Simply click on the \"Settings\" tab in your Space and choose the Space Hardware you'd like.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-gpu-settings.png)\n\nWhile you might need to adapt portions of your machine learning inference code to run on a GPU (here's a [handy guide](https://cnvrg.io/pytorch-cuda/) if you are using PyTorch), Gradio is completely agnostic to the choice of hardware and will work completely fine if you use it with CPUs, GPUs, TPUs, or any other hardware!\n\nNote: your GPU memory is different than your CPU memory, so if you upgrade your hardware,\nyou might need to adjust the value of the `default_concurrency_limit` parameter described above.\n\n", "heading1": "Upgrading your Hardware (GPUs, TPUs, etc.)", "source_page_url": "https://gradio.app/guides/setting-up-a-demo-for-maximum-performance", "source_page_title": "Other Tutorials - Setting Up A Demo For Maximum Performance Guide"}, {"text": "Congratulations! You know how to set up a Gradio demo for maximum performance. 
Good luck on your next viral demo!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/setting-up-a-demo-for-maximum-performance", "source_page_title": "Other Tutorials - Setting Up A Demo For Maximum Performance Guide"}, {"text": "App-level parameters have been moved from `Blocks` to `launch()`\n\nThe `gr.Blocks` class constructor previously contained several parameters that applied to your entire Gradio app, specifically:\n\n* `theme`: The theme for your Gradio app\n* `css`: Custom CSS code as a string\n* `css_paths`: Paths to custom CSS files\n* `js`: Custom JavaScript code\n* `head`: Custom HTML code to insert in the head of the page\n* `head_paths`: Paths to custom HTML files to insert in the head\n\nSince `gr.Blocks` can be nested and are not necessarily unique to a Gradio app, these parameters have now been moved to `Blocks.launch()`, which can only be called once for your entire Gradio app.\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks(\n theme=gr.themes.Soft(),\n css=\".my-class { color: red; }\",\n) as demo:\n gr.Textbox(label=\"Input\")\n\ndemo.launch()\n```\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Textbox(label=\"Input\")\n\ndemo.launch(\n theme=gr.themes.Soft(),\n css=\".my-class { color: red; }\",\n)\n```\n\nThis change makes it clearer that these parameters apply to the entire app and not to individual `Blocks` instances.\n\n`show_api` parameter replaced with `footer_links`\n\nThe `show_api` parameter in `launch()` has been replaced with a more flexible `footer_links` parameter that allows you to control which links appear in the footer of your Gradio app.\n\n**In Gradio 5.x:**\n- `show_api=True` (default) showed the API documentation link in the footer\n- `show_api=False` hid the API documentation link\n\n**In Gradio 6.x:**\n- `footer_links` accepts a list of strings: `[\"api\", \"gradio\", \"settings\"]`\n- You can now control precisely 
which footer links are shown:\n - `\"api\"`: Shows the API documentation link\n - `\"gradio\"`: Shows the \"Built with Gradio\" link\n - `\"settings\"`: Shows the settings link\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Textbox(label=\"Input\")\n\ndemo.launch(sho", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "he \"Built with Gradio\" link\n - `\"settings\"`: Shows the settings link\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Textbox(label=\"Input\")\n\ndemo.launch(show_api=False)\n```\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Textbox(label=\"Input\")\n\ndemo.launch(footer_links=[\"gradio\", \"settings\"])\n```\n\nTo replicate the old behavior:\n- `show_api=True` \u2192 `footer_links=[\"api\", \"gradio\", \"settings\"]` (or just omit the parameter, as this is the default)\n- `show_api=False` \u2192 `footer_links=[\"gradio\", \"settings\"]`\n\nEvent listener parameters: `show_api` removed and `api_name=False` no longer supported\n\nIn event listeners (such as `.click()`, `.change()`, etc.), the `show_api` parameter has been removed, and `api_name` no longer accepts `False` as a valid value. 
These have been replaced with a new `api_visibility` parameter that provides more fine-grained control.\n\n**In Gradio 5.x:**\n- `show_api=True` (default) showed the endpoint in the API documentation\n- `show_api=False` hid the endpoint from API docs but still allowed downstream apps to use it\n- `api_name=False` completely disabled the API endpoint (no downstream apps could use it)\n\n**In Gradio 6.x:**\n- `api_visibility` accepts one of three string values:\n - `\"public\"`: The endpoint is shown in API docs and accessible to all (equivalent to old `show_api=True`)\n - `\"undocumented\"`: The endpoint is hidden from API docs but still accessible to downstream apps (equivalent to old `show_api=False`)\n - `\"private\"`: The endpoint is completely disabled and inaccessible (equivalent to old `api_name=False`)\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n btn = gr.Button(\"Click me\")\n output = gr.Textbox()\n \n btn.click(fn=lambda: \"Hello\", outputs=output, show_api=False)\n \ndemo.launch()\n```\n\nOr to completely disable the API:\n\n```python\nbtn.click(fn=lambda: \"Hello\", outputs=output, api_name=False)\n`", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": " \n btn.click(fn=lambda: \"Hello\", outputs=output, show_api=False)\n \ndemo.launch()\n```\n\nOr to completely disable the API:\n\n```python\nbtn.click(fn=lambda: \"Hello\", outputs=output, api_name=False)\n```\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n btn = gr.Button(\"Click me\")\n output = gr.Textbox()\n \n btn.click(fn=lambda: \"Hello\", outputs=output, api_visibility=\"undocumented\")\n \ndemo.launch()\n```\n\nOr to completely disable the API:\n\n```python\nbtn.click(fn=lambda: \"Hello\", outputs=output, api_visibility=\"private\")\n```\n\nTo replicate the old 
behavior:\n- `show_api=True` \u2192 `api_visibility=\"public\"` (or just omit the parameter, as this is the default)\n- `show_api=False` \u2192 `api_visibility=\"undocumented\"`\n- `api_name=False` \u2192 `api_visibility=\"private\"`\n\n`like_user_message` moved from `.like()` event to constructor \n\nThe `like_user_message` parameter has been moved from the `.like()` event listener to the Chatbot constructor.\n\n**Before (Gradio 5.x):**\n```python\nchatbot = gr.Chatbot()\nchatbot.like(print_like_dislike, None, None, like_user_message=True)\n```\n\n**After (Gradio 6.x):**\n```python\nchatbot = gr.Chatbot(like_user_message=True)\nchatbot.like(print_like_dislike, None, None)\n```\n\n\nDefault API names for `Interface` and `ChatInterface` now use function names\n\nThe default API endpoint names for `gr.Interface` and `gr.ChatInterface` have changed to be consistent with how `gr.Blocks` events work and to better support MCP (Model Context Protocol) tools.\n\n**In Gradio 5.x:**\n- `gr.Interface` had a default API name of `/predict`\n- `gr.ChatInterface` had a default API name of `/chat`\n\n**In Gradio 6.x:**\n- Both `gr.Interface` and `gr.ChatInterface` now use the name of the function you pass in as the default API endpoint name\n- This makes the API more descriptive and consistent with `gr.Blocks` behavior\n\nE.g. if your Gradio app is:\n\n```python\nimport gradio as gr\n\ndef generate_text(prompt):\n return f\"Generated: {prompt}\"\n\n", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "This makes the API more descriptive and consistent with `gr.Blocks` behavior\n\nE.g. 
if your Gradio app is:\n\n```python\nimport gradio as gr\n\ndef generate_text(prompt):\n    return f\"Generated: {prompt}\"\n\ndemo = gr.Interface(fn=generate_text, inputs=\"text\", outputs=\"text\")\ndemo.launch()\n```\n\nPreviously, the API endpoint that Gradio generated would be: `/predict`. Now, the API endpoint will be: `/generate_text`.\n\n**To maintain the old endpoint names:**\n\nIf you need to keep the old endpoint names for backward compatibility (e.g., if you have external services calling these endpoints), you can explicitly set the `api_name` parameter:\n\n```python\ndemo = gr.Interface(fn=generate_text, inputs=\"text\", outputs=\"text\", api_name=\"predict\")\n```\n\nSimilarly for `ChatInterface`:\n\n```python\ndemo = gr.ChatInterface(fn=chat_function, api_name=\"chat\")\n```\n\n`gr.Chatbot` and `gr.ChatInterface` tuple format removed\n\nThe tuple format for chatbot messages has been removed in Gradio 6.0. You must now use the messages format with dictionaries containing \"role\" and \"content\" keys.\n\n**In Gradio 5.x:**\n- You could use `type=\"tuples\"` or the default tuple format: `[[\"user message\", \"assistant message\"], ...]`\n- The tuple format was a list of lists where each inner list had two elements: `[user_message, assistant_message]`\n\n**In Gradio 6.x:**\n- Only the messages format is supported: `type=\"messages\"`\n- Messages must be dictionaries with \"role\" and \"content\" keys: `[{\"role\": \"user\", \"content\": \"Hello\"}, {\"role\": \"assistant\", \"content\": \"Hi there!\"}]`\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\n# Using tuple format\nchatbot = gr.Chatbot(value=[[\"Hello\", \"Hi there!\"]])\n```\n\nOr with `type=\"tuples\"`:\n\n```python\nchatbot = gr.Chatbot(value=[[\"Hello\", \"Hi there!\"]], type=\"tuples\")\n```\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\n# Must use messages format\nchatbot = gr.Chatbot(\n    value=[\n        {\"role\": \"user\", \"content\": \"Hello\"},\n        {\"role\": \"assistant\", \"content\": \"Hi the", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "adio 6.x):**\n\n```python\nimport gradio as gr\n\n# Must use messages format\nchatbot = gr.Chatbot(\n    value=[\n        {\"role\": \"user\", \"content\": \"Hello\"},\n        {\"role\": \"assistant\", \"content\": \"Hi there!\"}\n    ],\n    type=\"messages\"\n)\n```\n\nSimilarly for `gr.ChatInterface`, if you were manually setting the chat history:\n\n```python\n# Before (Gradio 5.x)\ndemo = gr.ChatInterface(\n    fn=chat_function,\n    examples=[[\"Hello\", \"Hi there!\"]]\n)\n\n# After (Gradio 6.x)\ndemo = gr.ChatInterface(\n    fn=chat_function,\n    examples=[{\"role\": \"user\", \"content\": \"Hello\"}, {\"role\": \"assistant\", \"content\": \"Hi there!\"}]\n)\n```\n\n**Note:** If you're using `gr.ChatInterface` with a function that returns messages, the function should return messages in the new format. The tuple format is no longer supported.\n\n`gr.ChatInterface` `history` format now uses structured content\n\nThe `history` format in `gr.ChatInterface` has been updated to consistently use OpenAI-style structured content format.
Content is now always a list of content blocks, even for simple text messages.\n\n**In Gradio 5.x:**\n- Content could be a simple string: `{\"role\": \"user\", \"content\": \"Hello\"}`\n- Simple text messages used a string directly\n\n**In Gradio 6.x:**\n- Content is always a list of content blocks: `{\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Hello\"}]}`\n- This format is consistent with OpenAI's message format and supports multimodal content (text, images, etc.)\n\n**Before (Gradio 5.x):**\n\n```python\nhistory = [\n {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n {\"role\": \"assistant\", \"content\": \"Paris\"}\n]\n```\n\n**After (Gradio 6.x):**\n\n```python\nhistory = [\n {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"What is the capital of France?\"}]},\n {\"role\": \"assistant\", \"content\": [{\"type\": \"text\", \"text\": \"Paris\"}]}\n]\n```\n\n**With files:**\n\nWhen files are uploaded in the chat, they are represented as content blocks with `\"type\": \"file\"`. All content blocks (files and text) are gro", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "type\": \"text\", \"text\": \"Paris\"}]}\n]\n```\n\n**With files:**\n\nWhen files are uploaded in the chat, they are represented as content blocks with `\"type\": \"file\"`. All content blocks (files and text) are grouped together in the same message's content array:\n\n```python\nhistory = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"file\", \"file\": {\"path\": \"cat1.png\"}},\n {\"type\": \"file\", \"file\": {\"path\": \"cat2.png\"}},\n {\"type\": \"text\", \"text\": \"What's the difference between these two images?\"}\n ]\n }\n]\n```\n\nThis structured format allows for multimodal content (text, images, files, etc.) 
in chat messages, making it consistent with OpenAI's API format. All files uploaded in a single message are grouped together in the `content` array along with any text content.\n\n`cache_examples` parameter updated and `cache_mode` introduced\n\nThe `cache_examples` parameter (used in `Interface`, `ChatInterface`, and `Examples`) no longer accepts the string value `\"lazy\"`. It now strictly accepts boolean values (`True` or `False`). To control the caching strategy, a new `cache_mode` parameter has been introduced.\n\n**In Gradio 5.x:**\n- `cache_examples` accepted `True`, `False`, or `\"lazy\"`.\n\n**In Gradio 6.x:**\n- `cache_examples` only accepts `True` or `False`.\n- `cache_mode` accepts `\"eager\"` (default) or `\"lazy\"`.\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\ndemo = gr.Interface(\n fn=predict, \n inputs=\"text\", \n outputs=\"text\", \n examples=[\"Hello\", \"World\"],\n cache_examples=\"lazy\"\n)\n```\n\n**After (Gradio 6.x):**\n\nYou must now set `cache_examples=True` and specify the mode separately:\n\n```python\nimport gradio as gr\n\ndemo = gr.Interface(\n fn=predict, \n inputs=\"text\", \n outputs=\"text\", \n examples=[\"Hello\", \"World\"],\n cache_examples=True,\n cache_mode=\"lazy\"\n)\n```\n\nIf you previously used `cache_examples=True` (which implied eager caching), no changes are required, as `cache_mode` defaults to `\"eager\"`.\n", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "ld\"],\n cache_examples=True,\n cache_mode=\"lazy\"\n)\n```\n\nIf you previously used `cache_examples=True` (which implied eager caching), no changes are required, as `cache_mode` defaults to `\"eager\"`.\n\n", "heading1": "App-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": 
"`gr.Video` no longer accepts tuple values for video and subtitles\n\nThe tuple format for returning video with subtitles has been deprecated. Instead of returning a tuple `(video_path, subtitle_path)`, you should now use the `gr.Video` component directly with the `subtitles` parameter.\n\n**In Gradio 5.x:**\n- You could return a tuple of `(video_path, subtitle_path)` from a function\n- The tuple format was `(str | Path, str | Path | None)`\n\n**In Gradio 6.x:**\n- Return a `gr.Video` component instance with the `subtitles` parameter\n- This provides more flexibility and consistency with other components\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\ndef generate_video_with_subtitles(input):\n video_path = \"output.mp4\"\n subtitle_path = \"subtitles.srt\"\n return (video_path, subtitle_path)\n\ndemo = gr.Interface(\n fn=generate_video_with_subtitles,\n inputs=\"text\",\n outputs=gr.Video()\n)\ndemo.launch()\n```\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\ndef generate_video_with_subtitles(input):\n video_path = \"output.mp4\"\n subtitle_path = \"subtitles.srt\"\n return gr.Video(value=video_path, subtitles=subtitle_path)\n\ndemo = gr.Interface(\n fn=generate_video_with_subtitles,\n inputs=\"text\",\n outputs=gr.Video()\n)\ndemo.launch()\n```\n\n`gr.HTML` `padding` parameter default changed to `False`\n\nThe default value of the `padding` parameter in `gr.HTML` has been changed from `True` to `False` for consistency with `gr.Markdown`.\n\n**In Gradio 5.x:**\n- `padding=True` was the default for `gr.HTML`\n- HTML components had padding by default\n\n**In Gradio 6.x:**\n- `padding=False` is the default for `gr.HTML`\n- This matches the default behavior of `gr.Markdown` for consistency\n\n**To maintain the old behavior:**\n\nIf you want to keep the padding that was present in Gradio 5.x, explicitly set `padding=True`:\n\n```python\nhtml = gr.HTML(\"
<div>Content</div>\", padding=True)\n```\n\n\n`gr.Dataframe` `row_count` and `col_count` parameters restructured\n\nThe `r", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "present in Gradio 5.x, explicitly set `padding=True`:\n\n```python\nhtml = gr.HTML(\"
<div>Content</div>\", padding=True)\n```\n\n\n`gr.Dataframe` `row_count` and `col_count` parameters restructured\n\nThe `row_count` and `col_count` parameters in `gr.Dataframe` have been restructured to provide more flexibility and clarity. The tuple format for specifying fixed/dynamic behavior has been replaced with separate parameters for initial counts and limits.\n\n**In Gradio 5.x:**\n- `row_count: int | tuple[int, str]` - Could be an int or tuple like `(5, \"fixed\")` or `(5, \"dynamic\")`\n- `col_count: int | tuple[int, str] | None` - Could be an int or tuple like `(3, \"fixed\")` or `(3, \"dynamic\")`\n\n**In Gradio 6.x:**\n- `row_count: int | None` - Just the initial number of rows to display\n- `row_limits: tuple[int | None, int | None] | None` - Tuple specifying (min_rows, max_rows) constraints\n- `column_count: int | None` - The initial number of columns to display\n- `column_limits: tuple[int | None, int | None] | None` - Tuple specifying (min_columns, max_columns) constraints\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\n# Fixed number of rows (users can't add/remove rows)\ndf = gr.Dataframe(row_count=(5, \"fixed\"), col_count=(3, \"dynamic\"))\n```\n\nOr with dynamic rows:\n\n```python\n# Dynamic rows (users can add/remove rows)\ndf = gr.Dataframe(row_count=(5, \"dynamic\"), col_count=(3, \"fixed\"))\n```\n\nOr with just integers (defaults to dynamic):\n\n```python\ndf = gr.Dataframe(row_count=5, col_count=3)\n```\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\n# Fixed number of rows (users can't add/remove rows)\ndf = gr.Dataframe(row_count=5, row_limits=(5, 5), column_count=3, column_limits=None)\n```\n\nOr with dynamic rows (users can add/remove rows):\n\n```python\n# Dynamic rows with no limits\ndf = gr.Dataframe(row_count=5, row_limits=None, column_count=3, column_limits=None)\n```\n\nOr with min/max constraints:\n\n```python\n# Rows between 3 and 10, columns between 2 and 5\ndf = gr.Dataframe(row_count=", "heading1": "Component-level
Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "r.Dataframe(row_count=5, row_limits=None, column_count=3, column_limits=None)\n```\n\nOr with min/max constraints:\n\n```python\nRows between 3 and 10, columns between 2 and 5\ndf = gr.Dataframe(row_count=5, row_limits=(3, 10), column_count=3, column_limits=(2, 5))\n```\n\n**Migration examples:**\n\n- `row_count=(5, \"fixed\")` \u2192 `row_count=5, row_limits=(5, 5)`\n- `row_count=(5, \"dynamic\")` \u2192 `row_count=5, row_limits=None`\n- `row_count=5` \u2192 `row_count=5, row_limits=None` (same behavior)\n- `col_count=(3, \"fixed\")` \u2192 `column_count=3, column_limits=(3, 3)`\n- `col_count=(3, \"dynamic\")` \u2192 `column_count=3, column_limits=None`\n- `col_count=3` \u2192 `column_count=3, column_limits=None` (same behavior)\n\n`allow_tags=True` is now the default for `gr.Chatbot`\n\nDue to the rise in LLMs returning HTML, markdown tags, and custom tags (such as `` tags), the default value of `allow_tags` in `gr.Chatbot` has changed from `False` to `True` in Gradio 6.\n\n**In Gradio 5.x:**\n- `allow_tags=False` was the default\n- All HTML and custom tags were sanitized/removed from chatbot messages (unless explicitly allowed)\n\n**In Gradio 6.x:**\n- `allow_tags=True` is the default\n- All custom tags (non-standard HTML tags) are preserved in chatbot messages\n- Standard HTML tags are still sanitized for security unless `sanitize_html=False`\n\n**Before (Gradio 5.x):**\n\n```python\nimport gradio as gr\n\nchatbot = gr.Chatbot()\n```\n\nThis would remove all tags from messages, including custom tags like ``.\n\n**After (Gradio 6.x):**\n\n```python\nimport gradio as gr\n\nchatbot = gr.Chatbot()\n```\n\nThis will now preserve custom tags like `` in the messages.\n\n**To maintain the old behavior:**\n\nIf you want to continue removing all tags from chatbot messages (the old default behavior), 
explicitly set `allow_tags=False`:\n\n```python\nimport gradio as gr\n\nchatbot = gr.Chatbot(allow_tags=False)\n```\n\n**Note:** You can also specify a list of specific tags to allow:\n\n```python\nchatbot = gr.Chatbot(allow_tags=[\"thinking\",", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "e`:\n\n```python\nimport gradio as gr\n\nchatbot = gr.Chatbot(allow_tags=False)\n```\n\n**Note:** You can also specify a list of specific tags to allow:\n\n```python\nchatbot = gr.Chatbot(allow_tags=[\"thinking\", \"tool_call\"])\n```\n\nThis will only preserve `<thinking>` and `<tool_call>` tags while removing all other custom tags.\n\n\n\nOther removed component parameters\n\nSeveral component parameters have been removed in Gradio 6.0. These parameters were previously deprecated and have now been fully removed.\n\n`gr.Chatbot` removed parameters\n\n**`bubble_full_width`** - This parameter has been removed as it no longer has any effect.\n\n\n**`resizeable`** - This parameter (with the typo) has been removed. Use `resizable` instead.\n\n**Before (Gradio 5.x):**\n```python\nchatbot = gr.Chatbot(resizeable=True)\n```\n\n**After (Gradio 6.x):**\n```python\nchatbot = gr.Chatbot(resizable=True)\n```\n\n**`show_copy_button`, `show_copy_all_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\nchatbot = gr.Chatbot(show_copy_button=True, show_copy_all_button=True, show_share_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\nchatbot = gr.Chatbot(buttons=[\"copy\", \"copy_all\", \"share\"])\n```\n\n`gr.Audio` / `WaveformOptions` removed parameters\n\n**`show_controls`** - This parameter in `WaveformOptions` has been removed. 
Use `show_recording_waveform` instead.\n\n**Before (Gradio 5.x):**\n```python\naudio = gr.Audio(\n waveform_options=gr.WaveformOptions(show_controls=False)\n)\n```\n\n**After (Gradio 6.x):**\n```python\naudio = gr.Audio(\n waveform_options=gr.WaveformOptions(show_recording_waveform=False)\n)\n```\n\n**`min_length` and `max_length`** - These parameters have been removed. Use validators instead.\n\n**Before (Gradio 5.x):**\n```python\naudio = gr.Audio(min_length=1, max_length=10)\n```\n\n**After (Gradio 6.x):**\n```python\naudio = gr.Audio(\n validator=lambda audio: gr.validators.is_audio_correct_length(audio, min_length=1", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "*\n```python\naudio = gr.Audio(min_length=1, max_length=10)\n```\n\n**After (Gradio 6.x):**\n```python\naudio = gr.Audio(\n validator=lambda audio: gr.validators.is_audio_correct_length(audio, min_length=1, max_length=10)\n)\n```\n\n**`show_download_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\naudio = gr.Audio(show_download_button=True, show_share_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\naudio = gr.Audio(buttons=[\"download\", \"share\"])\n```\n\n**Note:** For components where `show_share_button` had a default of `None` (which would show the button on Spaces), you can use `buttons=[\"share\"]` to always show it, or omit it from the list to hide it.\n\n`gr.Image` removed parameters\n\n**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.\n\n**Before (Gradio 5.x):**\n```python\nimage = gr.Image(mirror_webcam=True)\n```\n\n**After (Gradio 6.x):**\n```python\nimage = gr.Image(webcam_options=gr.WebcamOptions(mirror=True))\n```\n\n**`webcam_constraints`** - This parameter has been removed. 
Use `webcam_options` with `gr.WebcamOptions` instead.\n\n**Before (Gradio 5.x):**\n```python\nimage = gr.Image(webcam_constraints={\"facingMode\": \"user\"})\n```\n\n**After (Gradio 6.x):**\n```python\nimage = gr.Image(webcam_options=gr.WebcamOptions(constraints={\"facingMode\": \"user\"}))\n```\n\n**`show_download_button`, `show_share_button`, `show_fullscreen_button`** - These parameters have been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\nimage = gr.Image(show_download_button=True, show_share_button=True, show_fullscreen_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\nimage = gr.Image(buttons=[\"download\", \"share\", \"fullscreen\"])\n```\n\n`gr.Video` removed parameters\n\n**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.\n\n**Before (Gradio 5.x):**\n```python\nvideo = gr.Video(m", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "\n`gr.Video` removed parameters\n\n**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.\n\n**Before (Gradio 5.x):**\n```python\nvideo = gr.Video(mirror_webcam=True)\n```\n\n**After (Gradio 6.x):**\n```python\nvideo = gr.Video(webcam_options=gr.WebcamOptions(mirror=True))\n```\n\n**`webcam_constraints`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.\n\n**Before (Gradio 5.x):**\n```python\nvideo = gr.Video(webcam_constraints={\"facingMode\": \"user\"})\n```\n\n**After (Gradio 6.x):**\n```python\nvideo = gr.Video(webcam_options=gr.WebcamOptions(constraints={\"facingMode\": \"user\"}))\n```\n\n**`min_length` and `max_length`** - These parameters have been removed. 
Use validators instead.\n\n**Before (Gradio 5.x):**\n```python\nvideo = gr.Video(min_length=1, max_length=10)\n```\n\n**After (Gradio 6.x):**\n```python\nvideo = gr.Video(\n validator=lambda video: gr.validators.is_video_correct_length(video, min_length=1, max_length=10)\n)\n```\n\n**`show_download_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\nvideo = gr.Video(show_download_button=True, show_share_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\nvideo = gr.Video(buttons=[\"download\", \"share\"])\n```\n\n`gr.ImageEditor` removed parameters\n\n**`crop_size`** - This parameter has been removed. Use `canvas_size` instead.\n\n**Before (Gradio 5.x):**\n```python\neditor = gr.ImageEditor(crop_size=(512, 512))\n```\n\n**After (Gradio 6.x):**\n```python\neditor = gr.ImageEditor(canvas_size=(512, 512))\n```\n\nRemoved components\n\n**`gr.LogoutButton`** - This component has been removed. Use `gr.LoginButton` instead, which handles both login and logout processes.\n\n**Before (Gradio 5.x):**\n```python\nlogout_btn = gr.LogoutButton()\n```\n\n**After (Gradio 6.x):**\n```python\nlogin_btn = gr.LoginButton()\n```\n\nNative plot components removed parameters\n\nThe following parameters have ", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "5.x):**\n```python\nlogout_btn = gr.LogoutButton()\n```\n\n**After (Gradio 6.x):**\n```python\nlogin_btn = gr.LoginButton()\n```\n\nNative plot components removed parameters\n\nThe following parameters have been removed from `gr.LinePlot`, `gr.BarPlot`, and `gr.ScatterPlot`:\n\n- `overlay_point` - This parameter has been removed.\n- `width` - This parameter has been removed. 
Use CSS styling or container width instead.\n- `stroke_dash` - This parameter has been removed.\n- `interactive` - This parameter has been removed.\n- `show_actions_button` - This parameter has been removed.\n- `color_legend_title` - This parameter has been removed. Use `color_title` instead.\n- `show_fullscreen_button`, `show_export_button` - These parameters have been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\nplot = gr.LinePlot(\n value=data,\n x=\"date\",\n y=\"downloads\",\n overlay_point=True,\n width=900,\n show_fullscreen_button=True,\n show_export_button=True\n)\n```\n\n**After (Gradio 6.x):**\n```python\nplot = gr.LinePlot(\n value=data,\n x=\"date\",\n y=\"downloads\",\n buttons=[\"fullscreen\", \"export\"]\n)\n```\n\n**Note:** For `color_legend_title`, use `color_title` instead:\n\n**Before (Gradio 5.x):**\n```python\nplot = gr.ScatterPlot(color_legend_title=\"Category\")\n```\n\n**After (Gradio 6.x):**\n```python\nplot = gr.ScatterPlot(color_title=\"Category\")\n```\n\n`gr.Textbox` removed parameters\n\n**`show_copy_button`** - This parameter has been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\ntext = gr.Textbox(show_copy_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\ntext = gr.Textbox(buttons=[\"copy\"])\n```\n\n`gr.Markdown` removed parameters\n\n**`show_copy_button`** - This parameter has been removed. 
Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\nmarkdown = gr.Markdown(show_copy_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\nmarkdown = gr.Markdown(buttons=[\"copy\"])\n```\n\n`gr.Dataframe` remove", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "stead.\n\n**Before (Gradio 5.x):**\n```python\nmarkdown = gr.Markdown(show_copy_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\nmarkdown = gr.Markdown(buttons=[\"copy\"])\n```\n\n`gr.Dataframe` removed parameters\n\n**`show_copy_button`, `show_fullscreen_button`** - These parameters have been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\ndf = gr.Dataframe(show_copy_button=True, show_fullscreen_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\ndf = gr.Dataframe(buttons=[\"copy\", \"fullscreen\"])\n```\n\n`gr.Slider` removed parameters\n\n**`show_reset_button`** - This parameter has been removed. Use the `buttons` parameter instead.\n\n**Before (Gradio 5.x):**\n```python\nslider = gr.Slider(show_reset_button=True)\n```\n\n**After (Gradio 6.x):**\n```python\nslider = gr.Slider(buttons=[\"reset\"])\n```\n\n\n", "heading1": "Component-level Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "`gradio sketch` command removed\n\nThe `gradio sketch` command-line tool has been deprecated and completely removed in Gradio 6. 
This tool was used to create Gradio apps through a visual interface.\n\n**In Gradio 5.x:**\n- You could run `gradio sketch` to launch an interactive GUI for building Gradio apps\n- The tool would generate Python code visually\n\n**In Gradio 6.x:**\n- The `gradio sketch` command has been removed\n- Running `gradio sketch` will raise a `DeprecationWarning`\n\n", "heading1": "CLI Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "`hf_token` parameter renamed to `token` in `Client`\n\nThe `hf_token` parameter in the `Client` class has been renamed to `token` for consistency and simplicity.\n\n**Before (Gradio 5.x):**\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/my-private-space\", hf_token=\"hf_...\")\n```\n\n**After (Gradio 6.x):**\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/my-private-space\", token=\"hf_...\")\n```\n\n`deploy_discord` method deprecated\n\nThe `deploy_discord` method in the `Client` class has been deprecated and will be removed in Gradio 6.0. This method was used to deploy Gradio apps as Discord bots.\n\n**Before (Gradio 5.x):**\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"username/space-name\")\nclient.deploy_discord(discord_bot_token=\"...\")\n```\n\n**After (Gradio 6.x):**\n\nThe `deploy_discord` method is no longer available. Please see the [documentation on creating a Discord bot with Gradio](https://www.gradio.app/guides/creating-a-discord-bot-from-a-gradio-app) for alternative approaches.\n\n`AppError` now subclasses `Exception` instead of `ValueError`\n\nThe `AppError` exception class in the Python client now subclasses `Exception` directly instead of `ValueError`. 
This is a breaking change if you have code that specifically catches `ValueError` to handle `AppError` instances.\n\n**Before (Gradio 5.x):**\n\n```python\nfrom gradio_client import Client\nfrom gradio_client.exceptions import AppError\n\ntry:\n    client = Client(\"username/space-name\")\n    result = client.predict(\"/predict\", inputs)\nexcept ValueError as e:\n    # This would catch AppError in Gradio 5.x\n    print(f\"Error: {e}\")\n```\n\n**After (Gradio 6.x):**\n\n```python\nfrom gradio_client import Client\nfrom gradio_client.exceptions import AppError\n\ntry:\n    client = Client(\"username/space-name\")\n    result = client.predict(\"/predict\", inputs)\nexcept AppError as e:\n    # Explicitly catch AppError\n    print(f\"App error: {e}\")\nexcept ValueError as e:\n    # This will no lon", "heading1": "Python Client Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "\"username/space-name\")\n    result = client.predict(\"/predict\", inputs)\nexcept AppError as e:\n    # Explicitly catch AppError\n    print(f\"App error: {e}\")\nexcept ValueError as e:\n    # This will no longer catch AppError\n    print(f\"Value error: {e}\")\n```\n\n", "heading1": "Python Client Changes", "source_page_url": "https://gradio.app/guides/gradio-6-migration-guide", "source_page_title": "Other Tutorials - Gradio 6 Migration Guide Guide"}, {"text": "To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. If not, you can do this for free in just a couple of minutes.\n\n1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)\n\n2. In the Cloud Console, click on the hamburger menu in the top-left corner and select \"APIs & Services\" from the menu. 
If you do not have an existing project, you will need to create one.\n\n3. Then, click the \"+ Enabled APIs & services\" button, which allows you to enable specific services for your project. Search for \"BigQuery API\", click on it, and click the \"Enable\" button. If you see the \"Manage\" button, then the BigQuery API is already enabled, and you're all set.\n\n4. In the APIs & Services menu, click on the \"Credentials\" tab and then click on the \"Create credentials\" button.\n\n5. In the \"Create credentials\" dialog, select \"Service account key\" as the type of credentials to create, and give it a name. Also grant the service account permissions by giving it a role such as \"BigQuery User\", which will allow you to run queries.\n\n6. After selecting the service account, select the \"JSON\" key type and then click on the \"Create\" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:\n\n```json\n{\n\t\"type\": \"service_account\",\n\t\"project_id\": \"your project\",\n\t\"private_key_id\": \"your private key id\",\n\t\"private_key\": \"private key\",\n\t\"client_email\": \"email\",\n\t\"client_id\": \"client id\",\n\t\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n\t\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n\t\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n\t\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/email_id\"\n}\n```\n\n", "heading1": "Setting up your BigQuery Credentials", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Once you have the credentials, you will need to use the BigQuery Python client to authenticate using your credentials. 
To do this, you will need to install the BigQuery Python client by running the following command in the terminal:\n\n```bash\npip install 'google-cloud-bigquery[pandas]'\n```\n\nYou'll notice that we've installed the pandas add-on, which will be helpful for processing the BigQuery dataset as a pandas dataframe. Once the client is installed, you can authenticate using your credentials by running the following code:\n\n```py\nfrom google.cloud import bigquery\n\nclient = bigquery.Client.from_service_account_json(\"path/to/key.json\")\n```\n\nWith your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.\n\nHere is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:\n\n```py\nimport numpy as np\n\nQUERY = (\n    'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '\n    'ORDER BY date DESC, confirmed_cases DESC '\n    'LIMIT 20')\n\ndef run_query():\n    query_job = client.query(QUERY)\n    query_result = query_job.result()\n    df = query_result.to_dataframe()\n    # Select a subset of columns\n    df = df[[\"confirmed_cases\", \"deaths\", \"county\", \"state_name\"]]\n    # Convert numeric columns to standard numpy types\n    df = df.astype({\"deaths\": np.int64, \"confirmed_cases\": np.int64})\n    return df\n```\n\n", "heading1": "Using the BigQuery Client", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.\n\nHere is an example of how to use the `gr.DataFrame` component to display the results. 
By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, we pass in the keyword argument `every` to tell the dashboard to refresh every hour (60\*60 seconds).\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.DataFrame(run_query, every=gr.Timer(60*60))\n\ndemo.launch()\n```\n\nPerhaps you'd like to add a visualization to your dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset and can be useful for exploring the data and gaining insights. Again, we can do this in real time\nby passing in the `every` parameter.\n\nHere is a complete example showing how to use the `gr.ScatterPlot` to visualize the data in addition to displaying it with the `gr.DataFrame`:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"\ud83d\udc89 Covid Dashboard (Updated Hourly)\")\n    with gr.Row():\n        gr.DataFrame(run_query, every=gr.Timer(60*60))\n        gr.ScatterPlot(run_query, every=gr.Timer(60*60), x=\"confirmed_cases\",\n                       y=\"deaths\", tooltip=\"county\", width=500, height=500)\n\ndemo.queue().launch()  # Run the demo with queuing enabled\n```\n", "heading1": "Building the Real-Time Dashboard", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Tabular data science is the most widely used domain of machine learning, with problems ranging from customer segmentation to churn prediction. Throughout various stages of the tabular data science workflow, communicating your work to stakeholders or clients can be cumbersome, which prevents data scientists from focusing on what matters, such as data analysis and model building. 
Data scientists can end up spending hours building dashboards that take in a dataframe and return plots, or return a prediction or a plot of clusters in a dataset. In this guide, we'll go through how to use `gradio` to improve your data science workflows. We will also talk about how to use `gradio` and [skops](https://skops.readthedocs.io/en/stable/) to build interfaces with only one line of code!\n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started).\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/using-gradio-for-tabular-workflows", "source_page_title": "Other Tutorials - Using Gradio For Tabular Workflows Guide"}, {"text": "We will take a look at how we can create a simple UI that predicts failures based on product information.\n\n```python\nimport gradio as gr\nimport pandas as pd\nimport joblib\nimport datasets\n\n\ninputs = [gr.Dataframe(row_count=(2, \"dynamic\"), col_count=(4, \"dynamic\"), label=\"Input Data\", interactive=1)]\n\noutputs = [gr.Dataframe(row_count=(2, \"dynamic\"), col_count=(1, \"fixed\"), label=\"Predictions\", headers=[\"Failures\"])]\n\nmodel = joblib.load(\"model.pkl\")\n\n# we will give our dataframe as the example\ndf = datasets.load_dataset(\"merve/supersoaker-failures\")\ndf = df[\"train\"].to_pandas()\n\ndef infer(input_dataframe):\n    return pd.DataFrame(model.predict(input_dataframe))\n\ngr.Interface(fn=infer, inputs=inputs, outputs=outputs, examples=[[df.head(2)]]).launch()\n```\n\nLet's break down the code above.\n\n- `fn`: the inference function that takes the input dataframe and returns predictions.\n- `inputs`: the component we take our input with. We define our input as a dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe with the aforementioned shape. 
When `row_count` is set to `dynamic`, the component is not locked to the pre-defined shape, so the dataframe can grow to fit whatever dataset you input.\n- `outputs`: The dataframe component that stores the outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we give `row_count` as 2 and `col_count` as 1 above. `headers` is a list of column names for the dataframe.\n- `examples`: You can pass the input either by dragging and dropping a CSV file, or through a pandas DataFrame in `examples`, whose headers will be picked up automatically by the interface.\n\nWe will now create an example for a minimal data visualization dashboard. You can find a more comprehensive version in the related Spaces.\n\n\n\n```python\nimport gradio as gr\nimport pandas as pd\nimport datasets\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndf = datasets.load_dataset", "heading1": "Let's Create a Simple Interface!", "source_page_url": "https://gradio.app/guides/using-gradio-for-tabular-workflows", "source_page_title": "Other Tutorials - Using Gradio For Tabular Workflows Guide"}, {"text": "
\n\n```python\nimport gradio as gr\nimport pandas as pd\nimport datasets\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndf = datasets.load_dataset(\"merve/supersoaker-failures\")\ndf = df[\"train\"].to_pandas()\ndf.dropna(axis=0, inplace=True)\n\ndef plot(df):\n    plt.scatter(df.measurement_13, df.measurement_15, c=df.loading, alpha=0.5)\n    plt.savefig(\"scatter.png\")\n    plt.clf()  # start a fresh figure so the plots don't overlap\n    df['failure'].value_counts().plot(kind='bar')\n    plt.savefig(\"bar.png\")\n    plt.clf()\n    sns.heatmap(df.select_dtypes(include=\"number\").corr())\n    plt.savefig(\"corr.png\")\n    plt.clf()\n    plots = [\"corr.png\", \"scatter.png\", \"bar.png\"]\n    return plots\n\ninputs = [gr.Dataframe(label=\"Supersoaker Production Data\")]\noutputs = [gr.Gallery(label=\"Profiling Dashboard\", columns=(1, 3))]\n\ngr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], title=\"Supersoaker Failures Analysis Dashboard\").launch()\n```\n\n\n\nWe will use the same dataset we used to train our model, but we will make a dashboard to visualize it this time.\n\n- `fn`: The function that will create plots based on the data.\n- `inputs`: We use the same `Dataframe` component we used above.\n- `outputs`: The `Gallery` component is used to keep our visualizations.\n- `examples`: We will have the dataset itself as the example.\n\n", "heading1": "Let's Create a Simple Interface!", "source_page_url": "https://gradio.app/guides/using-gradio-for-tabular-workflows", "source_page_title": "Other Tutorials - Using Gradio For Tabular Workflows Guide"}, {"text": "`skops` is a library built on top of `huggingface_hub` and `sklearn`. With the recent `gradio` integration of `skops`, you can build tabular data interfaces with one line of code!\n\n```python\nimport gradio as gr\n\n# title and description are optional\ntitle = \"Supersoaker Defective Product Prediction\"\ndescription = \"This model predicts Supersoaker production line failures. 
Drag and drop any slice from the dataset or edit values as you wish in the dataframe component below.\"\n\ngr.load(\"huggingface/scikit-learn/tabular-playground\", title=title, description=description).launch()\n```\n\n\n\n`sklearn` models pushed to Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names and the task being solved (either `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes the column names and the example input to build it. You can [refer to the skops documentation on hosting models on the Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to the Hub using `skops`.\n", "heading1": "Easily load tabular data interfaces with one line of code using skops", "source_page_url": "https://gradio.app/guides/using-gradio-for-tabular-workflows", "source_page_title": "Other Tutorials - Using Gradio For Tabular Workflows Guide"}, {"text": "Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.\n\nIn some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).\n\nIn this Guide, we will walk you through the process of running a Gradio app behind Nginx on your own web server to achieve this.\n\n**Prerequisites**\n\n1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)\n2. 
A working Gradio app saved as a Python file on your web server\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. Start by editing the Nginx configuration file on your web server. By default, this is located at: `/etc/nginx/nginx.conf`\n\nIn the `http` block, add the following line to include server block configurations from a separate file:\n\n```bash\ninclude /etc/nginx/sites-enabled/*;\n```\n\n2. Create a new file in the `/etc/nginx/sites-available` directory (create the directory if it does not already exist), using a filename that represents your app, for example: `sudo nano /etc/nginx/sites-available/my_gradio_app`\n\n3. Paste the following into your file editor:\n\n```bash\nserver {\n    listen 80;\n    server_name example.com www.example.com;  # Change this to your domain name\n\n    location /gradio-demo/ {  # Change this if you'd like to serve your Gradio app on a different path\n        proxy_pass http://127.0.0.1:7860/;  # Change this if your Gradio app will be running on a different port\n        proxy_buffering off;\n        proxy_redirect off;\n        proxy_http_version 1.1;\n        proxy_set_header Upgrade $http_upgrade;\n        proxy_set_header Connection \"upgrade\";\n        proxy_set_header Host $host;\n        proxy_set_header X-Forwarded-Host $host;\n        proxy_set_header X-Forwarded-Proto $scheme;\n    }\n}\n```\n\n\nTip: Setting the `X-Forwarded-Host` and `X-Forwarded-Proto` headers is important as Gradio uses these, in conjunction with the `root_path` parameter discussed below, to construct the public URL that your app is being served on. Gradio uses the public URL to fetch various static assets. If these headers are not set, your Gradio app may load in a broken state.\n\n*Note:* The `$host` variable does not include the host port. 
If you are serving your Gradio application on a raw IP address and port, you should use the `$http_host` variable instead, in these lines:\n\n```bash\n    proxy_set_header Host $host;\n    proxy_set_header X-Forwarded-Host $host;\n```\n\n", "heading1": "Editing your Nginx configuration file", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. Before you launch your Gradio app, you'll need to set the `root_path` to be the same as the subpath that you specified in your nginx configuration. This is necessary for Gradio to run on any subpath besides the root of the domain.\n\n *Note:* Instead of a subpath, you can also provide a complete URL for `root_path` (beginning with `http` or `https`) in which case the `root_path` is treated as an absolute URL instead of a URL suffix (but in this case, you'll need to update the `root_path` if the domain changes).\n\nHere's a simple example of a Gradio app with a custom `root_path` corresponding to the Nginx configuration above.\n\n```python\nimport gradio as gr\nimport time\n\ndef test(x):\n    time.sleep(4)\n    return x\n\ngr.Interface(test, \"textbox\", \"textbox\").queue().launch(root_path=\"/gradio-demo\")\n```\n\n2. Start a `tmux` session by typing `tmux` and pressing Enter (optional)\n\nIt's recommended that you run your Gradio app in a `tmux` session so that you can keep it running in the background easily\n\n3. Then, start your Gradio app. Simply type in `python` followed by the name of your Gradio Python file. 
By default, your app will run on `localhost:7860`, but if it starts on a different port, you will need to update the nginx configuration file above.\n\n", "heading1": "Run your Gradio app on your web server", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. If you are in a tmux session, exit by typing CTRL+B, followed by the \"D\" key.\n\n2. Finally, restart nginx by running `sudo systemctl restart nginx`.\n\nAnd that's it! If you visit `https://example.com/gradio-demo` in your browser, you should see your Gradio app running there.\n\n", "heading1": "Restart Nginx", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "Let's go through a simple example to understand how to containerize a Gradio app using Docker.\n\nStep 1: Create Your Gradio App\n\nFirst, we need a simple Gradio app. Let's create a Python file named `app.py` with the following content:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return f\"Hello {name}!\"\n\ngr.Interface(fn=greet, inputs=\"text\", outputs=\"text\").launch()\n```\n\nThis app creates a simple interface that greets the user by name.\n\nStep 2: Create a Dockerfile\n\nNext, we'll create a Dockerfile to specify how our app should be built and run in a Docker container. Create a file named `Dockerfile` in the same directory as your app with the following content:\n\n```dockerfile\nFROM python:3.10-slim\n\nWORKDIR /usr/src/app\nCOPY . 
.\nRUN pip install --no-cache-dir gradio\nEXPOSE 7860\nENV GRADIO_SERVER_NAME=\"0.0.0.0\"\n\nCMD [\"python\", \"app.py\"]\n```\n\nThis Dockerfile performs the following steps:\n- Starts from a Python 3.10 slim image.\n- Sets the working directory and copies the app into the container.\n- Installs Gradio (you should install all other requirements as well).\n- Exposes port 7860 (Gradio's default port).\n- Sets the `GRADIO_SERVER_NAME` environment variable to ensure Gradio listens on all network interfaces.\n- Specifies the command to run the app.\n\nStep 3: Build and Run Your Docker Container\n\nWith the Dockerfile in place, you can build and run your container:\n\n```bash\ndocker build -t gradio-app .\ndocker run -p 7860:7860 gradio-app\n```\n\nYour Gradio app should now be accessible at `http://localhost:7860`.\n\n", "heading1": "How to Dockerize a Gradio App", "source_page_url": "https://gradio.app/guides/deploying-gradio-with-docker", "source_page_title": "Other Tutorials - Deploying Gradio With Docker Guide"}, {"text": "When running Gradio applications in Docker, there are a few important things to keep in mind:\n\nRunning the Gradio app on `\"0.0.0.0\"` and exposing port 7860\n\nIn the Docker environment, setting `GRADIO_SERVER_NAME=\"0.0.0.0\"` as an environment variable (or directly in your Gradio app's `launch()` function) is crucial for allowing connections from outside the container. And the `EXPOSE 7860` directive in the Dockerfile tells Docker to expose Gradio's default port on the container to enable external access to the Gradio app. \n\nEnable Stickiness for Multiple Replicas\n\nWhen deploying Gradio apps with multiple replicas, such as on AWS ECS, it's important to enable stickiness with `sessionAffinity: ClientIP`. This ensures that all requests from the same user are routed to the same instance. 
This is important because Gradio's communication protocol requires multiple separate connections from the frontend to the backend in order for events to be processed correctly. (If you use Terraform, you'll want to add a [stickiness block](https://registry.terraform.io/providers/hashicorp/aws/3.14.1/docs/resources/lb_target_group#stickiness) into your target group definition.)\n\nDeploying Behind a Proxy\n\nIf you're deploying your Gradio app behind a proxy, like Nginx, it's essential to configure the proxy correctly. Gradio provides a [Guide that walks through the necessary steps](https://www.gradio.app/guides/running-gradio-on-your-web-server-with-nginx). This setup ensures your app is accessible and performs well in production environments.\n\n", "heading1": "Important Considerations", "source_page_url": "https://gradio.app/guides/deploying-gradio-with-docker", "source_page_title": "Other Tutorials - Deploying Gradio With Docker Guide"}, {"text": "In this guide, we will demonstrate some of the ways you can use Gradio with Comet. We will cover the basics of using Comet with Gradio and show you some of the ways that you can leverage Gradio's advanced features such as [Embedding with iFrames](https://www.gradio.app/guides/sharing-your-app/embedding-with-iframes) and [State](https://www.gradio.app/docs/state) to build some amazing model evaluation workflows.\n\nHere is a list of the topics covered in this guide.\n\n1. Logging Gradio UI's to your Comet Experiments\n2. Embedding Gradio Applications directly into your Comet Projects\n3. Embedding Hugging Face Spaces directly into your Comet Projects\n4. 
Logging Model Inferences from your Gradio Application to Comet\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "[Comet](https://www.comet.com?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) is an MLOps Platform that is designed to help Data Scientists and Teams build better models faster! Comet provides tooling to Track, Explain, Manage, and Monitor your models in a single place! It works with Jupyter Notebooks and Scripts, and most importantly, it's 100% free!\n\n", "heading1": "What is Comet?", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "First, install the dependencies needed to run these examples:\n\n```shell\npip install comet_ml torch torchvision transformers gradio shap requests Pillow\n```\n\nNext, you will need to [sign up for a Comet Account](https://www.comet.com/signup?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs). Once you have your account set up, [grab your API Key](https://www.comet.com/docs/v2/guides/getting-started/quickstart/get-an-api-key?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) and configure your Comet credentials.\n\nIf you're running these examples as a script, you can either export your credentials as environment variables:\n\n```shell\nexport COMET_API_KEY=\"\"\nexport COMET_WORKSPACE=\"\"\nexport COMET_PROJECT_NAME=\"\"\n```\n\nor set them in a `.comet.config` file in your working directory. Your file should be formatted in the following way:\n\n```shell\n[comet]\napi_key=\nworkspace=\nproject_name=\n```\n\nIf you are using the provided Colab Notebooks to run these examples, please run the cell with the following snippet before starting the Gradio UI. 
Running this cell allows you to interactively add your API key to the notebook.\n\n```python\nimport comet_ml\ncomet_ml.init()\n```\n\n", "heading1": "Setup", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Gradio_and_Comet.ipynb)\n\nIn this example, we will go over how to log your Gradio Applications to Comet and interact with them using the Gradio Custom Panel.\n\nLet's start by building a simple Image Classification example using `resnet18`.\n\n```python\nimport comet_ml\n\nimport gradio as gr\nimport requests\nimport torch\nfrom PIL import Image\nfrom torchvision import transforms\n\ntorch.hub.download_url_to_file(\"https://github.com/pytorch/hub/raw/master/images/dog.jpg\", \"dog.jpg\")\n\nif torch.cuda.is_available():\n    device = \"cuda\"\nelse:\n    device = \"cpu\"\n\nmodel = torch.hub.load(\"pytorch/vision:v0.6.0\", \"resnet18\", pretrained=True).eval()\nmodel = model.to(device)\n\n# Download human-readable labels for ImageNet.\nresponse = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\n\ndef predict(inp):\n    inp = Image.fromarray(inp.astype(\"uint8\"), \"RGB\")\n    inp = transforms.ToTensor()(inp).unsqueeze(0)\n    with torch.no_grad():\n        prediction = torch.nn.functional.softmax(model(inp.to(device))[0], dim=0)\n    return {labels[i]: float(prediction[i]) for i in range(1000)}\n\n\ninputs = gr.Image()\noutputs = gr.Label(num_top_classes=3)\n\nio = gr.Interface(\n    fn=predict, inputs=inputs, outputs=outputs, examples=[\"dog.jpg\"]\n)\nio.launch(inline=False, share=True)\n\nexperiment = comet_ml.Experiment()\nexperiment.add_tag(\"image-classifier\")\n\nio.integrate(comet_ml=experiment)\n```\n\nThe last line in this snippet will log the URL of the Gradio Application to your 
Comet Experiment. You can find the URL in the Text Tab of your Experiment.\n\n\n\nAdd the Gradio Panel to your Experiment to interact with your application.\n\n\n\n", "heading1": "1. Logging Gradio UI's to your Comet Experiments", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\nIf you are permanently hosting your Gradio application, you can embed the UI using the Gradio Panel Extended custom Panel.\n\nGo to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page.\n\n\"adding-panels\"\n\nNext, search for Gradio Panel Extended in the Public Panels section and click `Add`.\n\n\"gradio-panel-extended\"\n\nOnce you have added your Panel, click `Edit` to access the Panel Options page and paste in the URL of your Gradio application.\n\n![Edit-Gradio-Panel-Options](https://user-images.githubusercontent.com/7529846/214573001-23814b5a-ca65-4ace-a8a5-b27cdda70f7a.gif)\n\n\"Edit-Gradio-Panel-URL\"\n\n", "heading1": "2. Embedding Gradio Applications directly into your Comet Projects", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\nYou can also embed Gradio Applications that are hosted on Hugging Face Spaces into your Comet Projects using the Hugging Face Spaces Panel.\n\nGo to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page. Next, search for the Hugging Face Spaces Panel in the Public Panels section and click `Add`.\n\n\"huggingface-spaces-panel\"\n\nOnce you have added your Panel, click `Edit` to access the Panel Options page and paste in the path of your Hugging Face Space, e.g. `pytorch/ResNet`.\n\n\"Edit-HF-Space\"\n\n", "heading1": "3. 
Embedding Hugging Face Spaces directly into your Comet Projects", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Logging_Model_Inferences_with_Comet_and_Gradio.ipynb)\n\nIn the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.\n\nIn the following snippet, we're going to log inferences from a Text Generation model. We can persist an Experiment across multiple inference calls using Gradio's [State](https://www.gradio.app/docs/state) object. This will allow you to log multiple inferences from a model to a single Experiment.\n\n```python\nimport comet_ml\nimport gradio as gr\nimport shap\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nif torch.cuda.is_available():\n    device = \"cuda\"\nelse:\n    device = \"cpu\"\n\nMODEL_NAME = \"gpt2\"\n\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_NAME)\n\n# set model decoder to true\nmodel.config.is_decoder = True\n# set text-generation params under task_specific_params\nmodel.config.task_specific_params[\"text-generation\"] = {\n    \"do_sample\": True,\n    \"max_length\": 50,\n    \"temperature\": 0.7,\n    \"top_k\": 50,\n    \"no_repeat_ngram_size\": 2,\n}\nmodel = model.to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\nexplainer = shap.Explainer(model, tokenizer)\n\n\ndef start_experiment():\n    \"\"\"Returns an APIExperiment object that is thread safe\n    and can be used to log inferences to a single Experiment\n    \"\"\"\n    try:\n        api = comet_ml.API()\n        workspace = api.get_default_", "heading1": "4. 
Logging Model Inferences to Comet", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": " \"\"\"Returns an APIExperiment object that is thread safe\n    and can be used to log inferences to a single Experiment\n    \"\"\"\n    try:\n        api = comet_ml.API()\n        workspace = api.get_default_workspace()\n        project_name = comet_ml.config.get_config()[\"comet.project_name\"]\n\n        experiment = comet_ml.APIExperiment(\n            workspace=workspace, project_name=project_name\n        )\n        experiment.log_other(\"Created from\", \"gradio-inference\")\n\n        message = f\"Started Experiment: [{experiment.name}]({experiment.url})\"\n\n        return (experiment, message)\n\n    except Exception:\n        return None, None\n\n\ndef predict(text, state, message):\n    experiment = state\n\n    shap_values = explainer([text])\n    plot = shap.plots.text(shap_values, display=False)\n\n    if experiment is not None:\n        experiment.log_other(\"message\", message)\n        experiment.log_html(plot)\n\n    return plot\n\n\nwith gr.Blocks() as demo:\n    start_experiment_btn = gr.Button(\"Start New Experiment\")\n    experiment_status = gr.Markdown()\n\n    # Log a message to the Experiment to provide more context\n    experiment_message = gr.Textbox(label=\"Experiment Message\")\n    experiment = gr.State()\n\n    input_text = gr.Textbox(label=\"Input Text\", lines=5, interactive=True)\n    submit_btn = gr.Button(\"Submit\")\n\n    output = gr.HTML(interactive=True)\n\n    start_experiment_btn.click(\n        start_experiment, outputs=[experiment, experiment_status]\n    )\n    submit_btn.click(\n        predict, inputs=[input_text, experiment, experiment_message], outputs=[output]\n    )\n```\n\nInferences from this snippet will be saved in the HTML tab of your experiment.\n\n\n\n", "heading1": "4. 
Logging Model Inferences to Comet", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "We hope you found this guide useful and that it provides some inspiration to help you build awesome model evaluation workflows with Comet and Gradio.\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "- Create an account on Hugging Face [here](https://huggingface.co/join).\n- Add a Gradio Demo under your username; see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up a Gradio Demo on Hugging Face.\n- Request to join the Comet organization [here](https://huggingface.co/Comet).\n\n", "heading1": "How to contribute Gradio demos on HF spaces on the Comet organization", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "- [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)\n", "heading1": "Additional Resources", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):\n\n1\\. Get the URL of the Google Sheet that you want to use. To do this, simply go to the Google Sheet, click on the \"Share\" button in the top-right corner, and then click on the \"Get shareable link\" button. 
This will give you a URL that looks something like this:\n\n```html\nhttps://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0\n```\n\n2\\. Now, let's modify this URL and then use it to read the data from the Google Sheets into a Pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):\n\n```python\nimport pandas as pd\n\nURL = \"https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0\"\ncsv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')\n\ndef get_data():\n return pd.read_csv(csv_url)\n```\n\n3\\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"\ud83d\udcc8 Real-Time Line Plot\")\n with gr.Row():\n with gr.Column():\n gr.DataFrame(get_data, every=gr.Timer(5))\n with gr.Column():\n gr.LinePlot(get_data, every=gr.Timer(5), x=\"Date\", y=\"Sales\", y_title=\"Sales ($ millions)\", overlay_point=True, width=500, height=500)\n\ndemo.queue().launch()  # Run the demo with queuing enabled\n```\n\nAnd that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.\n\n", "heading1": "Public Google Sheets", "source_page_url": "https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets", "source_page_title": "Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide"}, {"text": "For private Google Sheets, the process requires a little more work, but not that much! 
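As a quick aside, the CSV export link used in the public-sheet code above is produced by a plain string substitution, so you can sanity-check the transformation on its own before wiring it into a dashboard. A minimal sketch, using the example sheet ID from this guide:

```python
# Turn a Google Sheets "shareable link" into its CSV export URL.
# The sheet ID below is the example one used throughout this guide.
URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"
csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')
print(csv_url)
# https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/export?format=csv&gid=0
```

If the printed URL downloads a CSV file when opened in a browser, `pd.read_csv(csv_url)` will work as well.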
The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets.\n\nAuthentication\n\nTo authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up google cloud credentials](https://developers.google.com/workspace/guides/create-credentials):\n\n1\\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)\n\n2\\. In the Cloud Console, click on the hamburger menu in the top-left corner and select \"APIs & Services\" from the menu. If you do not have an existing project, you will need to create one.\n\n3\\. Then, click the \"+ Enabled APIs & services\" button, which allows you to enable specific services for your project. Search for \"Google Sheets API\", click on it, and click the \"Enable\" button. If you see the \"Manage\" button, then Google Sheets is already enabled, and you're all set.\n\n4\\. In the APIs & Services menu, click on the \"Credentials\" tab and then click on the \"Create credentials\" button.\n\n5\\. In the \"Create credentials\" dialog, select \"Service account key\" as the type of credentials to create, and give it a name. **Note down the email of the service account**\n\n6\\. After selecting the service account, select the \"JSON\" key type and then click on the \"Create\" button. This will download the JSON key file containing your credentials to your computer. 
It will look something like this:\n\n```json\n{\n\t\"type\": \"service_account\",\n\t\"project_id\": \"your project\",\n\t\"private_key_id\": \"your private key id\",\n\t\"private_key\": \"private key\",\n\t\"client_email\": \"email\",\n\t\"client_id\": \"client id\",\n\t\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n\t\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n\t\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n\t\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/email_id\"\n}\n```\n\n", "heading1": "Private Google Sheets", "source_page_url": "https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets", "source_page_title": "Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide"}, {"text": "google.com/o/oauth2/token\",\n\t\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n\t\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/email_id\"\n}\n```\n\nQuerying\n\nOnce you have the credentials `.json` file, you can use the following steps to query your Google Sheet:\n\n1\\. Click on the \"Share\" button in the top-right corner of the Google Sheet. Share the Google Sheets with the email address of the service account from Step 5 of the authentication subsection (this step is important!). Then click on the \"Get shareable link\" button. This will give you a URL that looks something like this:\n\n```html\nhttps://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0\n```\n\n2\\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python by running in the terminal: `pip install gspread`\n\n3\\. 
Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):\n\n```python\nimport gspread\nimport pandas as pd\n\n# Authenticate with Google and get the sheet\nURL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'\n\ngc = gspread.service_account(\"path/to/key.json\")\nsh = gc.open_by_url(URL)\nworksheet = sh.sheet1\n\ndef get_data():\n values = worksheet.get_all_values()\n df = pd.DataFrame(values[1:], columns=values[0])\n return df\n\n```\n\n4\\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio cod", "heading1": "Private Google Sheets", "source_page_url": "https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets", "source_page_title": "Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide"}, {"text": ". To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. 
Here's the Gradio code:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"\ud83d\udcc8 Real-Time Line Plot\")\n with gr.Row():\n with gr.Column():\n gr.DataFrame(get_data, every=gr.Timer(5))\n with gr.Column():\n gr.LinePlot(get_data, every=gr.Timer(5), x=\"Date\", y=\"Sales\", y_title=\"Sales ($ millions)\", overlay_point=True, width=500, height=500)\n\ndemo.queue().launch()  # Run the demo with queuing enabled\n```\n\nYou now have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.\n\n", "heading1": "Private Google Sheets", "source_page_url": "https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets", "source_page_title": "Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide"}, {"text": "And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets", "source_page_title": "Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide"}, {"text": "First of all, we need some data to visualize. 
| column | type |\n| --- | --- |\n| product_id | int8 |\n| inventory_count | int8 |\n| price | float8 |\n| product_name | varchar |
\n\n5\\. Click Save to save the table schema.\n\nOur table is now ready!\n\n", "heading1": "Create a table in Supabase", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.\n\n6\\. Install `supabase` by running the following command in your terminal:\n\n```bash\npip install supabase\n```\n\n7\\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`).\n\n8\\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):\n\n```python\nimport supabase\n\n# Initialize the Supabase client\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\n# Define the data to write\nimport random\n\nmain_list = []\nfor i in range(10):\n value = {'product_id': i,\n 'product_name': f\"Item {i}\",\n 'inventory_count': random.randint(1, 100),\n 'price': random.random()*100\n }\n main_list.append(value)\n\n# Write the data to the table\ndata = client.table('Product').insert(main_list).execute()\n```\n\nReturn to your Supabase dashboard and refresh the page, and you should now see 10 rows populated in the `Product` table!\n\n", "heading1": "Write data to Supabase", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a realtime dashboard using `gradio`.\n\nNote: We repeat certain steps in this section (like creating the Supabase 
client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.\n\n9\\. Write a function that loads the data from the `Product` table and returns it as a pandas DataFrame:\n\n```python\nimport supabase\nimport pandas as pd\n\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\ndef read_data():\n response = client.table('Product').select(\"*\").execute()\n df = pd.DataFrame(response.data)\n return df\n```\n\n10\\. Create a small Gradio dashboard with two bar plots that plot the prices and inventories of all of the items every minute and update in real-time:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as dashboard:\n with gr.Row():\n gr.BarPlot(read_data, x=\"product_id\", y=\"price\", title=\"Prices\", every=gr.Timer(60))\n gr.BarPlot(read_data, x=\"product_id\", y=\"inventory_count\", title=\"Inventory\", every=gr.Timer(60))\n\ndashboard.queue().launch()\n```\n\nNotice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:\n\n\n\n", "heading1": "Visualize the Data in a Real-Time Gradio Dashboard", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. 
If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.\n\nTry adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "When you are building a Gradio demo, particularly out of Blocks, you may find it cumbersome to keep re-running your code to test your changes.\n\nTo make it faster and more convenient to write your code, we've made it easier to \"reload\" your Gradio apps instantly when you are developing in a **Python IDE** (like VS Code, Sublime Text, PyCharm, and so on) or generally running your Python code from the terminal. We've also developed an analogous \"magic command\" that allows you to re-run cells faster if you use **Jupyter Notebooks** (or any similar environment like Colab).\n\nThis short Guide will cover both of these methods, so no matter how you write Python, you'll leave knowing how to build Gradio apps faster.\n\n", "heading1": "Why Hot Reloading?", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"Greetings from Gradio!\")\n inp = gr.Textbox(placeholder=\"What is your name?\")\n out = gr.Textbox()\n\n inp.change(fn=lambda x: f\"Welcome, {x}!\",\n inputs=inp,\n outputs=out)\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nThe problem is that any time you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python 
run.py`.\n\nInstead of doing this, you can run your code in **reload mode** by changing one word: `python` to `gradio`:\n\nIn the terminal, run `gradio run.py`. That's it!\n\nNow, you'll see something like this:\n\n```bash\nWatching: '/Users/freddy/sources/gradio/gradio', '/Users/freddy/sources/gradio/demo/'\n\nRunning on local URL: http://127.0.0.1:7860\n```\n\nThe important part here is the line that says `Watching...` What's happening here is that Gradio will be observing the directory where the `run.py` file lives, and if the file changes, it will automatically rerun the file for you. So you can focus on writing your code, and your Gradio demo will refresh automatically \ud83e\udd73\n\nTip: the `gradio` command does not detect the parameters passed to the `launch()` method because the `launch()` method is never called in reload mode. For example, setting `auth` or `show_error` in `launch()` will not be reflected in the app.\n\nThere is one important thing to keep in mind when using the reload mode: Gradio specifically looks for a Gradio Blocks/Interface demo called `demo` in your code. If you have named your demo something else, you will need to pass in the name of your demo as the 2nd parameter in your code. So if your `run.py` file looked like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as my_demo:\n gr.Markdown(\"Greetings from Gradio!\")\n inp = gr.
So if your `run.py` file looked like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as my_demo:\n gr.Markdown(\"Greetings from Gradio!\")\n inp = gr.Textbox(placeholder=\"What is your name?\")\n out = gr.Textbox()\n\n inp.change(fn=lambda x: f\"Welcome, {x}!\",\n inputs=inp,\n outputs=out)\n\nif __name__ == \"__main__\":\n my_demo.launch()\n```\n\nThen you would launch it in reload mode like this: `gradio run.py --demo-name=my_demo`.\n\nBy default, the Gradio use UTF-8 encoding for scripts. **For reload mode**, If you are using encoding formats other than UTF-8 (such as cp1252), make sure you've done like this:\n\n1. Configure encoding declaration of python script, for example: `-*- coding: cp1252 -*-`\n2. Confirm that your code editor has identified that encoding format. \n3. Run like this: `gradio run.py --encoding cp1252`\n\n\ud83d\udd25 If your application accepts command line arguments, you can pass them in as well. Here's an example:\n\n```python\nimport gradio as gr\nimport argparse\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--name\", type=str, default=\"User\")\nargs, unknown = parser.parse_known_args()\n\nwith gr.Blocks() as demo:\n gr.Markdown(f\"Greetings {args.name}!\")\n inp = gr.Textbox()\n out = gr.Textbox()\n\n inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nWhich you could run like this: `gradio run.py --name Gretel`\n\nAs a small aside, this auto-reloading happens if you change your `run.py` source code or the Gradio source code. 
This means reload mode can also be useful if you decide to [contribute to Gradio itself](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) \u2705\n\n\n", "heading1": "Python IDE Reload \ud83d\udd25", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "By default, reload mode will re-run your entire script for every change you make.\nBut there are some cases where this is not desirable.\nFor example, loading a machine learning model should probably only happen once to save time. There are also some Python libraries that use C or Rust extensions that throw errors when they are reloaded, like `numpy` and `tiktoken`.\n\nIn these situations, you can place code that you do not want to be re-run inside an `if gr.NO_RELOAD:` codeblock. Here's an example of how you can use it to only load a transformers model once during the development process.\n\nTip: The value of `gr.NO_RELOAD` is `True`. So you don't have to change your script when you are done developing and want to run it in production. Simply run the file with `python` instead of `gradio`.\n\n```python\nimport gradio as gr\n\nif gr.NO_RELOAD:\n\tfrom transformers import pipeline\n\tpipe = pipeline(\"text-classification\", model=\"cardiffnlp/twitter-roberta-base-sentiment-latest\")\n\ndemo = gr.Interface(lambda s: {d[\"label\"]: d[\"score\"] for d in pipe(s)}, gr.Textbox(), gr.Label())\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\n", "heading1": "Controlling the Reload \ud83c\udf9b\ufe0f", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "You can also enable Gradio's **Vibe Mode**, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. 
To enable this, simply use the `--vibe` flag with Gradio, e.g. `gradio --vibe app.py`.\n\nVibe Mode lets you describe commands using natural language and have an LLM write or edit the code in your Gradio app. The LLM is powered by Hugging Face's [Inference Providers](https://huggingface.co/docs/inference-providers/en/index), so you must be logged into Hugging Face locally to use this. \n\nNote: When Vibe Mode is enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use only for local development.\n\n", "heading1": "Vibe Mode", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "What if you use Jupyter Notebooks (or Colab Notebooks, etc.) to develop code? We've got something for you too!\n\nWe've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:\n\n`%load_ext gradio`\n\nThen, in the cell where you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:\n\n```py\n%%blocks\n\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"Greetings from Gradio!\")\n inp = gr.Textbox()\n out = gr.Textbox()\n\n inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n```\n\nNotice that:\n\n- You do not need to launch your demo \u2014 Gradio does that for you automatically!\n\n- Every time you rerun the cell, Gradio will re-render your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.\n\nHere's what it looks like in a Jupyter notebook:\n\n![](https://gradio-builds.s3.amazonaws.com/demo-files/jupyter_reload.gif)\n\n\ud83e\ude84 This works in colab notebooks too! 
[Here's a colab notebook](https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?usp=sharing) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!\n\nTip: You may have to use `%%blocks --share` in Colab to get the demo to appear in the cell.\n\nThe Notebook Magic is now the author's preferred way of building Gradio demos. Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.\n\n---\n\n", "heading1": "Jupyter Notebook Magic \ud83d\udd2e", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "Now that you know how to develop quickly using Gradio, start building your own!\n\nIf you are looking for inspiration, try exploring demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/) \ud83e\udd17\n", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "Data visualization is a crucial aspect of data analysis and machine learning. The Gradio `DataFrame` component is a popular way to display tabular data within a web application. \n\nBut what if you want to stylize the table of data? What if you want to add background colors, partially highlight cells, or change the display precision of numbers? This Guide is for you!\n\n\n\nLet's dive in!\n\n**Prerequisites**: We'll be using the `gradio.Blocks` class in our examples.\nYou can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. 
Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/styling-the-gradio-dataframe", "source_page_title": "Other Tutorials - Styling The Gradio Dataframe Guide"}, {"text": "The Gradio `DataFrame` component now supports values of the type `Styler` from `pandas`. This allows us to reuse the rich existing API and documentation of the `Styler` class instead of inventing a new style format on our own. Here's a complete example of how it looks:\n\n```python\nimport pandas as pd \nimport gradio as gr\n\n# Creating a sample dataframe\ndf = pd.DataFrame({\n \"A\" : [14, 4, 5, 4, 1], \n \"B\" : [5, 2, 54, 3, 2], \n \"C\" : [20, 20, 7, 3, 8], \n \"D\" : [14, 3, 6, 2, 6], \n \"E\" : [23, 45, 64, 32, 23]\n}) \n\n# Applying style to highlight the maximum value in each row\nstyler = df.style.highlight_max(color = 'lightgreen', axis = 0)\n\n# Displaying the styled dataframe in Gradio\nwith gr.Blocks() as demo:\n gr.DataFrame(styler)\n \ndemo.launch()\n```\n\nThe Styler class can be used to apply conditional formatting and styling to dataframes, making them more visually appealing and interpretable. You can highlight certain values, apply gradients, or even use custom CSS to style the DataFrame. The Styler object is applied to a DataFrame and it returns a new object with the relevant styling properties, which can then be previewed directly, or rendered dynamically in a Gradio interface.\n\nTo read more about the Styler object, read the official `pandas` documentation at: https://pandas.pydata.org/docs/user_guide/style.html\n\nBelow, we'll explore a few examples:\n\nHighlighting Cells\n\nOk, so let's revisit the previous example. 
We start by creating a `pd.DataFrame` object and then highlight the highest value in each row with a light green color:\n\n```python\nimport pandas as pd \n\n# Creating a sample dataframe\ndf = pd.DataFrame({\n \"A\" : [14, 4, 5, 4, 1], \n \"B\" : [5, 2, 54, 3, 2], \n \"C\" : [20, 20, 7, 3, 8], \n \"D\" : [14, 3, 6, 2, 6], \n \"E\" : [23, 45, 64, 32, 23]\n}) \n\n# Applying style to highlight the maximum value in each row\nstyler = df.style.highlight_max(color = 'lightgreen', axis = 0)\n```\n\nNow, we simply pass this object into the Gradio `DataFra
Here's how you can change text colors for certain columns:\n\n```python\nimport pandas as pd \nimport gradio as gr\n\n# Creating a sample dataframe\ndf = pd.DataFrame({\n \"A\" : [14, 4, 5, 4, 1], \n \"B\" : [5, 2, 54, 3, 2], \n \"C\" : [20, 20, 7, 3, 8], \n \"D\" : [14, 3, 6, 2, 6], \n \"E\" : [23, 45, 64, 32, 23]\n}) \n\n# Function to apply text color\ndef highlight_cols(x): \n df = x.copy() \n df.loc[:, :] = 'color: purple'\n df[['B', 'C', 'E']] = 'color: green'\n return df \n\n# Applying the style function\ns = df.style.apply(highlight_cols, axis = None)\n\n# Displaying the styled dataframe in Gradio\nwith gr.Blocks() as demo:\n gr.DataFrame(s)\n \ndemo.launch()\n```\n\nIn this script, we define a custom function `highlight_cols` that changes the text color to purple for all cells, but overrides this for columns B, C, and E with green. Here's how it looks:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-color.png)\n\nDisplay Precision \n\nSometimes, the data you are dealing with might have long floating-point numbers, and you may want to display only a fixed number of decimals for simplicity. The pandas Styler object allows you to format the precision of numbers displayed. Here's how you can do this:\n\n```python\nimport pandas as pd\nimport gradio as gr\n\n# Creating a sample dataframe with floating numbers\ndf = pd.DataFrame({\n \"A\" : [14.12345, 4.", "heading1": "The Pandas `Styler`", "source_page_url": "https://gradio.app/guides/styling-the-gradio-dataframe", "source_page_title": "Other Tutorials - Styling The Gradio Dataframe Guide"}, {"text": "on of numbers displayed. Here's how you can do this:\n\n```python\nimport pandas as pd\nimport gradio as gr\n\n# Creating a sample dataframe with floating numbers\ndf = pd.DataFrame({\n \"A\" : [14.12345, 4.23456, 5.34567, 4.45678, 1.56789], \n \"B\" : [5.67891, 2.78912, 54.89123, 3.91234, 2.12345], \n # ... 
other columns\n}) \n\n# Setting the precision of numbers to 2 decimal places\ns = df.style.format(\"{:.2f}\")\n\n# Displaying the styled dataframe in Gradio\nwith gr.Blocks() as demo:\n gr.DataFrame(s)\n \ndemo.launch()\n```\n\nIn this script, the `format` method of the Styler object is used to set the precision of numbers to two decimal places. Much cleaner now:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-precision.png)\n\n\n\n", "heading1": "The Pandas `Styler`", "source_page_url": "https://gradio.app/guides/styling-the-gradio-dataframe", "source_page_title": "Other Tutorials - Styling The Gradio Dataframe Guide"}, {"text": "So far, we've been restricting ourselves to styling that is supported by the Pandas `Styler` class. But what if you want to create custom styles like partially highlighting cells based on their values:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/dataframe_custom_styling.png)\n\n\nThis isn't possible with `Styler`, but you can do this by creating your own **`styling`** array, which is a 2D array the same size and shape as your data. Each element in this list should be a CSS style string (e.g. `\"background-color: green\"`) that applies to the `<td>` element containing the cell value (or an empty string if no custom CSS should be applied). 
Similarly, you can create a **`display_value`** array which controls the value that is displayed in each cell (which can be different from the underlying value, which is the one used for searching/sorting).\n\nHere's the complete code for how you can use custom styling with `gr.Dataframe` as in the screenshot above:\n\n$code_dataframe_custom_styling\n\n\n", "heading1": "Custom Styling", "source_page_url": "https://gradio.app/guides/styling-the-gradio-dataframe", "source_page_title": "Other Tutorials - Styling The Gradio Dataframe Guide"}, {"text": "One thing to keep in mind is that the gradio `DataFrame` component only accepts custom styling objects when it is non-interactive (i.e. in \"static\" mode). If the `DataFrame` component is interactive, then the styling information is ignored and the raw table values are shown instead. \n\nThe `DataFrame` component is by default non-interactive, unless it is used as an input to an event. In that case, you can force the component to be non-interactive by setting the `interactive` prop like this:\n\n```python\nc = gr.DataFrame(styler, interactive=False)\n```\n\n", "heading1": "Note about Interactivity", "source_page_url": "https://gradio.app/guides/styling-the-gradio-dataframe", "source_page_title": "Other Tutorials - Styling The Gradio Dataframe Guide"}, {"text": "This is just a taste of what's possible using the `gradio.DataFrame` component with the `Styler` class from `pandas`. Try it out and let us know what you think!", "heading1": "Conclusion \ud83c\udf89", "source_page_url": "https://gradio.app/guides/styling-the-gradio-dataframe", "source_page_title": "Other Tutorials - Styling The Gradio Dataframe Guide"}, {"text": "Image classification is a central task in computer vision. 
Building better models to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.\n\nState-of-the-art image classifiers are based on the _transformer_ architecture, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). These models are a perfect fit for Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like the demo at the bottom of the page.\n\nLet's get started!\n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started).\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/image-classification-with-vision-transformers", "source_page_title": "Other Tutorials - Image Classification With Vision Transformers Guide"}, {"text": "First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.\n\nExpand the Tasks category on the left sidebar and select \"Image Classification\" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.\n\nAt the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. 
We will use this model for our demo.\n\n", "heading1": "Step 1 \u2014 Choosing a Vision Image Classification Model", "source_page_url": "https://gradio.app/guides/image-classification-with-vision-transformers", "source_page_title": "Other Tutorials - Image Classification With Vision Transformers Guide"}, {"text": "When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.\nAll of these are automatically inferred from the model tags.\n\nBesides the import statement, it only takes a single line of Python to load and launch the demo.\n\nWe use the `gr.Interface.load()` method and pass in the path to the model, including the `huggingface/` prefix to designate that it is from the Hugging Face Hub.\n\n```python\nimport gradio as gr\n\ngr.Interface.load(\n \"huggingface/google/vit-base-patch16-224\",\n examples=[\"alligator.jpg\", \"laptop.jpg\"]).launch()\n```\n\nNotice that we have added one more parameter, `examples`, which allows us to prepopulate our interface with a few predefined examples.\n\nThis produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!\n\n\n\n---\n\nAnd you're done! In one line of code, you have built a web demo for an image classifier.
If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!\n", "heading1": "Step 2 \u2014 Loading the Vision Transformer Model with Gradio", "source_page_url": "https://gradio.app/guides/image-classification-with-vision-transformers", "source_page_title": "Other Tutorials - Image Classification With Vision Transformers Guide"}, {"text": "In this Guide, we'll walk you through:\n\n- An introduction to Gradio, Hugging Face Spaces, and Wandb\n- How to set up a Gradio demo using the Wandb integration for JoJoGAN\n- How to contribute your own Gradio demos, after tracking your experiments on Wandb, to the Wandb organization on Hugging Face\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:\n\n", "heading1": "What is Wandb?", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Gradio\n\nGradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface and the demos can be launched inside Jupyter notebooks, Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)\n\nHugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos.
Spaces can be public or private and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).\n\n", "heading1": "What are Hugging Face Spaces & Gradio?", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.\n\nLet's get started!\n\n1. Create a W&B account\n\n Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don\u2019t have one already. It shouldn't take more than a couple minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.\n\n2. Open Colab and Install Gradio and W&B\n\n We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.\n\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)\n\n Install Gradio and Wandb at the top:\n\n ```sh\n pip install gradio wandb\n ```\n\n3. Finetune StyleGAN with W&B experiment tracking\n\n This next step will open a W&B dashboard to track your experiments, and a Gradio panel with a drop-down menu of pretrained models, served from a Gradio demo hosted on Hugging Face Spaces.
Here's the code you need for that:\n\n ```python\n alpha = 1.0\n alpha = 1-alpha\n\n preserve_color = True\n num_iter = 100\n log_interval = 50\n\n samples = []\n column_names = [\"Reference (y)\", \"Style Code(w)\", \"Real Face Image(x)\"]\n\n wandb.init(project=\"JoJoGAN\")\n config = wandb.config\n config.num_iter = num_iter\n config.preserve_color = preserve_color\n wandb.log(\n {\"Style reference\": [wandb.Image(transforms.ToPILImage()(target_im))]},\n step=0)\n\n # load discriminator for perceptual loss\n discriminator = Discriminator(1024, 2).eval().to(device)\n ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)\n discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n # reset generator\n del generator\n generator = deepcopy(original_generator)\n\n g_optim = optim.Adam(generator.parameters(),", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": ": storage)\n discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n # reset generator\n del generator\n generator = deepcopy(original_generator)\n\n g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))\n\n # Which layers to swap for generating a family of plausible real images -> fake image\n if preserve_color:\n id_swap = [9,11,15,16,17]\n else:\n id_swap = list(range(7, generator.n_latent))\n\n for idx in tqdm(range(num_iter)):\n mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)\n in_latent = latents.clone()\n in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]\n\n img = generator(in_latent, input_is_latent=True)\n\n with torch.no_grad():\n real_feat = discriminator(targets)\n fake_feat = discriminator(img)\n\n loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat,
real_feat)])/len(fake_feat)\n\n wandb.log({\"loss\": loss}, step=idx)\n if idx % log_interval == 0:\n generator.eval()\n my_sample = generator(my_w, input_is_latent=True)\n generator.train()\n my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))\n wandb.log(\n {\"Current stylization\": [wandb.Image(my_sample)]},\n step=idx)\n table_data = [\n wandb.Image(transforms.ToPILImage()(target_im)),\n wandb.Image(img),\n wandb.Image(my_sample),\n ]\n samples.append(table_data)\n\n g_optim.zero_grad()\n loss.backward()\n g_optim.step()\n\n out_table = wandb.Table(data=samples, columns=column_names)\n wandb.log({\"Current Samples\": out_table})\n ```\n4. Save, Download, and Load Model\n\n Here's how to save and download your model.\n\n ```python\n from PIL import Image\n import torch\n torch.backends.cudnn.benchmark = True\n from torchvision impor", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "ave, Download, and Load Model\n\n Here's how to save and download your model.\n\n ```python\n from PIL import Image\n import torch\n torch.backends.cudnn.benchmark = True\n from torchvision import transforms, utils\n from util import *\n import math\n import random\n import numpy as np\n from torch import nn, autograd, optim\n from torch.nn import functional as F\n from tqdm import tqdm\n import lpips\n from model import *\n from e4e_projection import projection as e4e_projection\n \n from copy import deepcopy\n import imageio\n \n import os\n import sys\n import torchvision.transforms as transforms\n from argparse import Namespace\n from e4e.models.psp import pSp\n from util import *\n from huggingface_hub import hf_hub_download\n from google.colab import files\n \n torch.save({\"g\": generator.state_dict()}, \"your-model-name.pt\")\n \n files.download('your-model-name.pt')\n 
\n latent_dim = 512\n device=\"cuda\"\n model_path_s = hf_hub_download(repo_id=\"akhaliq/jojogan-stylegan2-ffhq-config-f\", filename=\"stylegan2-ffhq-config-f.pt\")\n original_generator = Generator(1024, latent_dim, 8, 2).to(device)\n ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)\n original_generator.load_state_dict(ckpt[\"g_ema\"], strict=False)\n mean_latent = original_generator.mean_latent(10000)\n \n generator = deepcopy(original_generator)\n \n ckpt = torch.load(\"/content/JoJoGAN/your-model-name.pt\", map_location=lambda storage, loc: storage)\n generator.load_state_dict(ckpt[\"g\"], strict=False)\n generator.eval()\n \n plt.rcParams['figure.dpi'] = 150\n \n transform = transforms.Compose(\n [\n transforms.Resize((1024, 1024)),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ]\n )\n \n def inference(img):\n img.save('out.jpg')\n aligned_face = align_face('out.jpg')\n \n my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": ".5, 0.5)),\n ]\n )\n \n def inference(img):\n img.save('out.jpg')\n aligned_face = align_face('out.jpg')\n \n my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)\n with torch.no_grad():\n my_sample = generator(my_w, input_is_latent=True)\n \n npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()\n imageio.imwrite('filename.jpeg', npimage)\n return 'filename.jpeg'\n ````\n\n5. Build a Gradio Demo\n\n ```python\n import gradio as gr\n \n title = \"JoJoGAN\"\n description = \"Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. 
Read more at the links below.\"\n \n demo = gr.Interface(\n inference,\n gr.Image(type=\"pil\"),\n gr.Image(type=\"file\"),\n title=title,\n description=description\n )\n \n demo.launch(share=True)\n ```\n\n6. Integrate Gradio into your W&B Dashboard\n\n The last step\u2014integrating your Gradio demo with your W&B dashboard\u2014is just one extra line:\n\n ```python\n demo.integrate(wandb=wandb)\n ```\n\n Once you call `integrate`, a demo will be created and you can embed it into your dashboard or report.\n\n Outside of W&B, using web components and the `gradio-app` tag, anyone can embed Gradio demos hosted on HF Spaces directly into their blogs, websites, documentation, etc.:\n \n ```html\n \n ```\n\n7. (Optional) Embed W&B plots in your Gradio App\n\n It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and\n embed them within your Gradio app within a `gr.HTML` block.\n\n The Report will need to be public and you will need to wrap the URL within an iFrame like this:\n\n ```python\n import gradio as gr\n \n def wandb_report(url):\n iframe = f'\n{:else}\n \t\n{/if}\n```\n\nYou can also combine existing Gradio components to create entirely unique experiences.\nLike rendering a gallery of chatbot conversations. \nThe possibilities are endless; please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).\nWe'll be adding more packages and documentation over the coming weeks!\n\n", "heading1": "Leveraging Existing Gradio Components", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "You can explore our component library via Storybook.
You'll be able to interact with our components and see them in their various states.\n\nFor those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.\n\n[Storybook Link](https://gradio.app/main/docs/js/storybook)\n\n", "heading1": "Matching Gradio Core's Design System", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If you want to make use of the vast vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. This allows you to make use of tools like tailwindcss, mdsvex, and more.\n\nCurrently, it is possible to configure the following:\n\nVite options:\n- `plugins`: A list of vite plugins to use.\n\nSvelte options:\n- `preprocess`: A list of svelte preprocessors to use.\n- `extensions`: A list of file extensions to compile to `.svelte` files.\n- `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/target) for more information.\n\nThe `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process.\n\nExample for a Vite plugin\n\nCustom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information. \n\nHere we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease. 
\n\n```\nnpm install tailwindcss@next @tailwindcss/vite@next\n```\n\nIn `gradio.config.js`:\n\n```typescript\nimport tailwindcss from \"@tailwindcss/vite\";\nexport default {\n plugins: [tailwindcss()]\n};\n```\n\nThen create a `style.css` file with the following content:\n\n```css\n@import \"tailwindcss\";\n```\n\nImport this file into `Index.svelte`. Note, that you need to import the css file containing `@import` and cannot just use a `\n```\n\n2. Add the JavaScript\n\nThen, add the following JavaScript code (which uses the Gradio JavaScript Client to connect to the Space) to your website by including this in the `` section of your website:\n\n```html\n\n```\n\n3. That's it!\n\nYour website now has a chat widget that connects to your Gradio app! Users can click the chat button to open the widget and start interacting with your app.\n\nCustomization\n\nYou can customize the appearance of the widget by modifying the CSS. Some ideas:\n- Change the colors to match your website's theme\n- Adjust the size and position of the widget\n- Add animations for opening/closing\n- Modify the message styling\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are hap", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", 
"source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "The Slack bot will listen to messages mentioning it in channels. When it receives a message (which can include text as well as files), it will send it to your Gradio app via Gradio's built-in API. Your bot will reply with the response it receives from the API. \n\nBecause Gradio's API is very flexible, you can create Slack bots that support text, images, audio, streaming, chat history, and a wide variety of other features very easily. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.30.00%E2%80%AFPM.gif)\n\n", "heading1": "How does it work?", "source_page_url": "https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Slack Bot From A Gradio App Guide"}, {"text": "* Install the latest version of `gradio` and the `slack-bolt` library:\n\n```bash\npip install --upgrade gradio slack-bolt~=1.0\n```\n\n* Have a running Gradio app. This app can be running locally or on Hugging Face Spaces. In this example, we will be using the [Gradio Playground Space](https://huggingface.co/spaces/abidlabs/gradio-playground-bot), which takes in an image and/or text and generates the code to generate the corresponding Gradio app.\n\nNow, we are ready to get started!\n\n1. Create a Slack App\n\n1. Go to [api.slack.com/apps](https://api.slack.com/apps) and click \"Create New App\"\n2. Choose \"From scratch\" and give your app a name\n3. Select the workspace where you want to develop your app\n4. Under \"OAuth & Permissions\", scroll to \"Scopes\" and add these Bot Token Scopes:\n - `app_mentions:read`\n - `chat:write`\n - `files:read`\n - `files:write`\n5. In the same \"OAuth & Permissions\" page, scroll back up and click the button to install the app to your workspace.\n6. 
Note the \"Bot User OAuth Token\" (starts with `xoxb-`) that appears, as we'll need it later\n7. Click on \"Socket Mode\" in the menu bar. When the page loads, click the toggle to \"Enable Socket Mode\"\n8. Give your token a name, such as `socket-token`, and copy the token that is generated (starts with `xapp-`) as we'll need it later.\n9. Finally, go to the \"Event Subscriptions\" option in the menu bar. Click the toggle to \"Enable Events\" and subscribe to the `app_mention` bot event.\n\n2. Write a Slack bot\n\nLet's start by writing a very simple Slack bot, just to make sure that everything is working. Write the following Python code in a file called `bot.py`, pasting the two tokens from step 6 and step 8 in the previous section.\n\n```py\nfrom slack_bolt import App\nfrom slack_bolt.adapter.socket_mode import SocketModeHandler\n\nSLACK_BOT_TOKEN = \"PASTE YOUR SLACK BOT TOKEN HERE\"\nSLACK_APP_TOKEN = \"PASTE YOUR SLACK APP TOKEN HERE\"\n\napp = App(token=SLACK_BOT_TOKEN)\n\n@app.event(\"app_mention\")\ndef handle_app_mention_ev", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Slack Bot From A Gradio App Guide"}, {"text": "eHandler\n\nSLACK_BOT_TOKEN = \"PASTE YOUR SLACK BOT TOKEN HERE\"\nSLACK_APP_TOKEN = \"PASTE YOUR SLACK APP TOKEN HERE\"\n\napp = App(token=SLACK_BOT_TOKEN)\n\n@app.event(\"app_mention\")\ndef handle_app_mention_events(body, say):\n user_id = body[\"event\"][\"user\"]\n say(f\"Hi <@{user_id}>! You mentioned me and said: {body['event']['text']}\")\n\nif __name__ == \"__main__\":\n handler = SocketModeHandler(app, SLACK_APP_TOKEN)\n handler.start()\n```\n\nIf that is working, we are ready to add Gradio-specific code. We will be using the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) to query the Gradio Playground Space mentioned above.
Here's the updated `bot.py` file:\n\n```python\nfrom slack_bolt import App\nfrom slack_bolt.adapter.socket_mode import SocketModeHandler\nfrom gradio_client import Client, handle_file\nimport httpx\nimport os\nimport re\n\nSLACK_BOT_TOKEN = \"PASTE YOUR SLACK BOT TOKEN HERE\"\nSLACK_APP_TOKEN = \"PASTE YOUR SLACK APP TOKEN HERE\"\n\napp = App(token=SLACK_BOT_TOKEN)\ngradio_client = Client(\"abidlabs/gradio-playground-bot\")\n\ndef download_image(url, filename):\n headers = {\"Authorization\": f\"Bearer {SLACK_BOT_TOKEN}\"}\n response = httpx.get(url, headers=headers)\n image_path = f\"./images/{filename}\"\n os.makedirs(\"./images\", exist_ok=True)\n with open(image_path, \"wb\") as f:\n f.write(response.content)\n return image_path\n\ndef slackify_message(message): \n # Replace markdown links with slack format and remove code language specifier after triple backticks\n pattern = r'\\[(.*?)\\]\\((.*?)\\)'\n cleaned = re.sub(pattern, r'<\\2|\\1>', message)\n cleaned = re.sub(r'```\\w+\\n', '```', cleaned)\n return cleaned.strip()\n\n@app.event(\"app_mention\")\ndef handle_app_mention_events(body, say):\n # Extract the message content without the bot mention\n text = body[\"event\"][\"text\"]\n bot_user_id = body[\"authorizations\"][0][\"user_id\"]\n clean_message = text.replace(f\"<@{bot_user_id}>\", \"\").strip()\n \n # Handle images if present\n files = []\n if \"files\" in body[\"event\"]:\n for", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Slack Bot From A Gradio App Guide"}, {"text": "= body[\"authorizations\"][0][\"user_id\"]\n clean_message = text.replace(f\"<@{bot_user_id}>\", \"\").strip()\n \n # Handle images if present\n files = []\n if \"files\" in body[\"event\"]:\n for file in body[\"event\"][\"files\"]:\n if file[\"filetype\"] in [\"png\", \"jpg\", \"jpeg\", \"gif\", \"webp\"]:\n image_path = download_image(file[\"url_private_download\"], file[\"name\"])\n files.append(handle_file(image_path))\n break\n \n # Submit to Gradio and send responses back to Slack\n for response in gradio_client.submit(\n message={\"text\": clean_message, \"files\": files},\n ):\n cleaned_response = slackify_message(response[-1])\n say(cleaned_response)\n\nif __name__ == \"__main__\":\n handler = SocketModeHandler(app, SLACK_APP_TOKEN)\n handler.start()\n```\n3. Add the bot to your Slack Workspace\n\nNow, create a new channel or navigate to an existing channel in your Slack workspace where you want to use the bot. Click the \"+\" button next to \"Channels\" in your Slack sidebar and follow the prompts to create a new channel.\n\nFinally, invite your bot to the channel:\n1. In your new channel, type `/invite @YourBotName`\n2. Select your bot from the dropdown\n3. Click \"Invite to Channel\"\n\n4. That's it!\n\nNow you can mention your bot in any channel it's in, optionally attach an image, and it will respond with generated Gradio app code!\n\nThe bot will:\n1. Listen for mentions\n2. Process any attached images\n3. Send the text and images to your Gradio app\n4.
Stream the responses back to the Slack channel\n\nThis is just a basic example - you can extend it to handle more types of files, add error handling, or integrate with different Gradio apps!\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.30.00%E2%80%AFPM.gif)\n\nIf you build a Slack bot from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gr", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Slack Bot From A Gradio App Guide"}, {"text": "/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.30.00%E2%80%AFPM.gif)\n\nIf you build a Slack bot from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Slack Bot From A Gradio App Guide"}, {"text": "The Discord bot will listen to messages mentioning it in channels. When it receives a message (which can include text as well as files), it will send it to your Gradio app via Gradio's built-in API. Your bot will reply with the response it receives from the API. \n\nBecause Gradio's API is very flexible, you can create Discord bots that support text, images, audio, streaming, chat history, and a wide variety of other features very easily. 
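The first processing step of the relay described above is plain string handling: the bot strips its own `<@id>` mention from the message text so that only the user's actual prompt is forwarded to the Gradio app. A minimal sketch of that step (the helper name `strip_mention` is hypothetical; the full bot code later in this guide does the same thing inline with `message.content.replace(...)`):

```python
def strip_mention(text: str, bot_user_id: int) -> str:
    # Discord renders a mention as "<@USER_ID>"; remove it and trim
    # whitespace so only the user's actual prompt remains.
    return text.replace(f"<@{bot_user_id}>", "").strip()

print(strip_mention("<@123> build me a calculator app", 123))
# build me a calculator app
```

The cleaned string is what gets passed as the `text` field of the message sent to the Gradio API.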
\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-18%20at%204.26.55%E2%80%AFPM.gif)\n\n", "heading1": "How does it work?", "source_page_url": "https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Discord Bot From A Gradio App Guide"}, {"text": "* Install the latest version of `gradio` and the `discord.py` libraries:\n\n```\npip install --upgrade gradio discord.py~=2.0\n```\n\n* Have a running Gradio app. This app can be running locally or on Hugging Face Spaces. In this example, we will be using the [Gradio Playground Space](https://huggingface.co/spaces/abidlabs/gradio-playground-bot), which takes in an image and/or text and generates the code to generate the corresponding Gradio app.\n\nNow, we are ready to get started!\n\n\n1. Create a Discord application\n\nFirst, go to the [Discord apps dashboard](https://discord.com/developers/applications). Look for the \"New Application\" button and click it. Give your application a name, and then click \"Create\".\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-4.png)\n\nOn the resulting screen, you will see basic information about your application. Under the Settings section, click on the \"Bot\" option. You can update your bot's username if you would like.\n\nThen click on the \"Reset Token\" button. A new token will be generated. Copy it as we will need it for the next step.\n\nScroll down to the section that says \"Privileged Gateway Intents\". Your bot will need certain permissions to work correctly. In this tutorial, we will only be using the \"Message Content Intent\" so click the toggle to enable this intent. Save the changes.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-3.png)\n\n\n\n2. 
Write a Discord bot\n\nLet's start by writing a very simple Discord bot, just to make sure that everything is working. Write the following Python code in a file called `bot.py`, pasting the Discord bot token from the previous step:\n\n```python\n# bot.py\nimport discord\n\nTOKEN = \"PASTE YOUR DISCORD BOT TOKEN HERE\"\n\nclient = discord.Client(intents=discord.Intents.default())\n\n@client.event\nasync def on_ready():\n print(f'{client.user} has connected to Discord!')\n\nclient.run(TOKEN)\n```\n\nNow, run this file: `python bot.py`, w", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Discord Bot From A Gradio App Guide"}, {"text": "CORD BOT TOKEN HERE\"\n\nclient = discord.Client(intents=discord.Intents.default())\n\n@client.event\nasync def on_ready():\n print(f'{client.user} has connected to Discord!')\n\nclient.run(TOKEN)\n```\n\nNow, run this file: `python bot.py`, which should run and print a message like:\n\n```text\nWe have logged in as GradioPlaygroundBot1451\n```\n\nIf that is working, we are ready to add Gradio-specific code. We will be using the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) to query the Gradio Playground Space mentioned above.
Here's the updated `bot.py` file:\n\n```python\nimport discord\nfrom gradio_client import Client, handle_file\nimport httpx\nimport os\n\nTOKEN = \"PASTE YOUR DISCORD BOT TOKEN HERE\"\n\nintents = discord.Intents.default()\nintents.message_content = True\n\nclient = discord.Client(intents=intents)\ngradio_client = Client(\"abidlabs/gradio-playground-bot\")\n\ndef download_image(attachment):\n response = httpx.get(attachment.url)\n image_path = f\"./images/{attachment.filename}\"\n os.makedirs(\"./images\", exist_ok=True)\n with open(image_path, \"wb\") as f:\n f.write(response.content)\n return image_path\n\n@client.event\nasync def on_ready():\n print(f'We have logged in as {client.user}')\n\n@client.event\nasync def on_message(message):\n # Ignore messages from the bot itself\n if message.author == client.user:\n return\n\n # Check if the bot is mentioned in the message and reply\n if client.user in message.mentions:\n # Extract the message content without the bot mention\n clean_message = message.content.replace(f\"<@{client.user.id}>\", \"\").strip()\n\n # Handle images (only the first image is used)\n files = []\n if message.attachments:\n for attachment in message.attachments:\n if any(attachment.filename.lower().endswith(ext) for ext in ['png', 'jpg', 'jpeg', 'gif', 'webp']):\n image_path = download_image(attachment)\n files.append(handle_file(image_path))", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Discord Bot From A Gradio App Guide"}, {"text": ".filename.lower().endswith(ext) for ext in ['png', 'jpg', 'jpeg', 'gif', 'webp']):\n image_path = download_image(attachment)\n files.append(handle_file(image_path))\n break\n \n # Stream the responses to the channel\n for response in gradio_client.submit(\n message={\"text\": clean_message, \"files\": files},\n ):\n await message.channel.send(response[-1])\n\nclient.run(TOKEN)\n```\n\n3.
Add the bot to your Discord Server\n\nNow we are ready to install the bot on our server. Go back to the [Discord apps dashboard](https://discord.com/developers/applications). Under the Settings section, click on the \"OAuth2\" option. Scroll down to the \"OAuth2 URL Generator\" box and select the \"bot\" checkbox:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-2.png)\n\n\n\nThen in \"Bot Permissions\" box that pops up underneath, enable the following permissions:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-1.png)\n\n\nCopy the generated URL that appears underneath, which should look something like:\n\n```text\nhttps://discord.com/oauth2/authorize?client_id=1319011745452265575&permissions=377957238784&integration_type=0&scope=bot\n```\n\nPaste it into your browser, which should allow you to add the Discord bot to any Discord server that you manage.\n\n\n4. That's it!\n\nNow you can mention your bot from any channel in your Discord server, optionally attach an image, and it will respond with generated Gradio app code!\n\nThe bot will:\n1. Listen for mentions\n2. Process any attached images\n3. Send the text and images to your Gradio app\n4. 
Stream the responses back to the Discord channel\n\n This is just a basic example - you can extend it to handle more types of files, add error handling, or integrate with different Gradio apps.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Discord Bot From A Gradio App Guide"}, {"text": "c example - you can extend it to handle more types of files, add error handling, or integrate with different Gradio apps.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-18%20at%204.26.55%E2%80%AFPM.gif)\n\nIf you build a Discord bot from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app", "source_page_title": "Chatbots - Creating A Discord Bot From A Gradio App Guide"}, {"text": "First, we'll build the UI without handling these events and build from there. 
\nWe'll use the Hugging Face InferenceClient in order to get started without setting up\nany API keys.\n\nThis is what the first draft of our application looks like:\n\n```python\nfrom huggingface_hub import InferenceClient\nimport gradio as gr\n\nclient = InferenceClient()\n\ndef respond(\n prompt: str,\n history,\n):\n if not history:\n history = [{\"role\": \"system\", \"content\": \"You are a friendly chatbot\"}]\n history.append({\"role\": \"user\", \"content\": prompt})\n\n yield history\n\n response = {\"role\": \"assistant\", \"content\": \"\"}\n for message in client.chat_completion(  # type: ignore\n history,\n temperature=0.95,\n top_p=0.9,\n max_tokens=512,\n stream=True,\n model=\"openai/gpt-oss-20b\"\n ):\n response[\"content\"] += message.choices[0].delta.content or \"\" if message.choices else \"\"\n yield history + [response]\n\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with GPT-OSS 20b \ud83e\udd17\")\n chatbot = gr.Chatbot(\n label=\"Agent\",\n avatar_images=(\n None,\n \"https://em-content.zobj.net/source/twitter/376/hugging-face_1f917.png\",\n ),\n )\n prompt = gr.Textbox(max_lines=1, label=\"Chat Message\")\n prompt.submit(respond, [prompt, chatbot], [chatbot])\n prompt.submit(lambda: \"\", None, [prompt])\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\n", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "Our undo event will populate the textbox with the previous user message and also remove all subsequent assistant responses.\n\nIn order to know the index of the last user message, we can pass `gr.UndoData` to our event handler function like so:\n\n```python\ndef handle_undo(history, undo_data: gr.UndoData):\n return history[:undo_data.index], history[undo_data.index]['content'][0][\"text\"]\n```\n\nWe then pass this function to the `undo` event!\n\n```python\n chatbot.undo(handle_undo, chatbot, [chatbot, 
prompt])\n```\n\nYou'll notice that every bot response will now have an \"undo icon\" you can use to undo the response - \n\n![undo_event](https://github.com/user-attachments/assets/180b5302-bc4a-4c3e-903c-f14ec2adcaa6)\n\nTip: You can also access the content of the user message with `undo_data.value`\n\n", "heading1": "The Undo Event", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "The retry event will work similarly. We'll use `gr.RetryData` to get the index of the previous user message and remove all the subsequent messages from the history. Then we'll use the `respond` function to generate a new response. We could also get the previous prompt via the `value` property of `gr.RetryData`.\n\n```python\ndef handle_retry(history, retry_data: gr.RetryData):\n new_history = history[:retry_data.index]\n previous_prompt = history[retry_data.index]['content'][0][\"text\"]\n yield from respond(previous_prompt, new_history)\n...\n\nchatbot.retry(handle_retry, chatbot, chatbot)\n```\n\nYou'll see that the bot messages have a \"retry\" icon now -\n\n![retry_event](https://github.com/user-attachments/assets/cec386a7-c4cd-4fb3-a2d7-78fd806ceac6)\n\nTip: The Hugging Face inference API caches responses, so in this demo, the retry button will not generate a new response.\n\n", "heading1": "The Retry Event", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "By now you should hopefully be seeing the pattern!\nTo let users like a message, we'll add a `.like` event to our chatbot.\nWe'll pass it a function that accepts a `gr.LikeData` object.\nIn this case, we'll just print the message that was either liked or disliked.\n\n```python\ndef handle_like(data: gr.LikeData):\n if data.liked:\n print(\"You upvoted this response: \", data.value)\n else:\n print(\"You downvoted this response: \", 
data.value)\n\nchatbot.like(handle_like, None, None)\n```\n\n", "heading1": "The Like Event", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "Same idea with the edit listener! With `gr.Chatbot(editable=True)`, you can capture user edits. The `gr.EditData` object tells us the index of the edited message and the new text of the message. Below, we use this object to edit the history, and delete any subsequent messages. \n\n```python\ndef handle_edit(history, edit_data: gr.EditData):\n new_history = history[:edit_data.index]\n new_history[-1]['content'] = [{\"text\": edit_data.value, \"type\": \"text\"}]\n return new_history\n\n...\n\nchatbot.edit(handle_edit, chatbot, chatbot)\n```\n\n", "heading1": "The Edit Event", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "As a bonus, we'll also cover the `.clear()` event, which is triggered when the user clicks the clear icon to clear all messages. As a developer, you can attach additional events that should happen when this icon is clicked, e.g. to handle clearing of additional chatbot state:\n\n```python\nfrom uuid import uuid4\nimport gradio as gr\n\n\ndef clear():\n print(\"Cleared uuid\")\n return uuid4()\n\n\ndef chat_fn(user_input, history, uuid):\n return f\"{user_input} with uuid {uuid}\"\n\n\nwith gr.Blocks() as demo:\n uuid_state = gr.State(\n uuid4\n )\n chatbot = gr.Chatbot()\n chatbot.clear(clear, outputs=[uuid_state])\n\n gr.ChatInterface(\n chat_fn,\n additional_inputs=[uuid_state],\n chatbot=chatbot,\n )\n\ndemo.launch()\n```\n\nIn this example, the `clear` function, bound to the `chatbot.clear` event, returns a new UUID into our session state when the chat history is cleared via the trash icon. 
This can be seen in the `chat_fn` function, which references the UUID saved in our session state.\n\nThis example also shows that you can use these events with `gr.ChatInterface` by passing in a custom `gr.Chatbot` object.\n\n", "heading1": "The Clear Event", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "That's it! You now know how you can implement the retry, undo, like, edit, and clear events for the Chatbot.\n\n\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/chatbot-specific-events", "source_page_title": "Chatbots - Chatbot Specific Events Guide"}, {"text": "Let's start by using `llama-index` on top of `openai` to build a RAG chatbot on any text or PDF files that you can demo and share in less than 30 lines of code. You'll need to have an OpenAI key for this example (keep reading for the free, open-source equivalent!)\n\n$code_llm_llamaindex\n\n", "heading1": "Llama Index", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "Here's an example using `langchain` on top of `openai` to build a general-purpose chatbot. As before, you'll need to have an OpenAI key for this example.\n\n$code_llm_langchain\n\nTip: For quick prototyping, the community-maintained langchain-gradio repo makes it even easier to build chatbots on top of LangChain.\n\n", "heading1": "LangChain", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "Of course, we could also use the `openai` library directly. 
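All of these chatbot examples stream responses the same way: start an empty assistant message, append each streamed delta to its `content`, and yield the growing history. Here is a library-agnostic sketch of that loop (the `deltas` list is a stand-in for a real API stream; the function name is illustrative):

```python
def stream_reply(history, deltas):
    """Fold streamed text deltas into one assistant message,
    yielding the updated chat history after every chunk."""
    response = {"role": "assistant", "content": ""}
    for delta in deltas:
        response["content"] += delta or ""  # some chunks carry no text
        yield history + [response]

history = [{"role": "user", "content": "Say hello"}]
updates = list(stream_reply(history, ["Hel", "lo", None, "!"]))
# the final update ends with the fully accumulated assistant message: "Hello!"
```

Swapping the stubbed `deltas` for an actual streaming client (Hugging Face, OpenAI, etc.) is the only change the real examples make.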
Here's a similar example to the LangChain one, but this time with streaming as well:\n\nTip: For quick prototyping, the openai-gradio library makes it even easier to build chatbots on top of OpenAI models.\n\n\n", "heading1": "OpenAI", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "Of course, in many cases you want to run a chatbot locally. Here's the equivalent example using the SmolLM2-135M-Instruct model with the Hugging Face `transformers` library.\n\n$code_llm_hf_transformers\n\n", "heading1": "Hugging Face `transformers`", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "The SambaNova Cloud API provides access to full-precision open-source models, such as the Llama family. Here's an example of how to build a Gradio app around the SambaNova API.\n\n$code_llm_sambanova\n\nTip: For quick prototyping, the sambanova-gradio library makes it even easier to build chatbots on top of SambaNova models.\n\n", "heading1": "SambaNova", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "The Hyperbolic AI API provides access to many open-source models, such as the Llama family. Here's an example of how to build a Gradio app around the Hyperbolic API.\n\n$code_llm_hyperbolic\n\nTip: For quick prototyping, the hyperbolic-gradio library makes it even easier to build chatbots on top of Hyperbolic models.\n\n\n", "heading1": "Hyperbolic", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "Anthropic's Claude model can also be used via API. 
Here's a simple 20 questions-style game built on top of the Anthropic API:\n\n$code_llm_claude\n\n\n", "heading1": "Anthropic's Claude", "source_page_url": "https://gradio.app/guides/chatinterface-examples", "source_page_title": "Chatbots - Chatinterface Examples Guide"}, {"text": "Every element of the chatbot value is a dictionary of `role` and `content` keys. You can always use plain Python dictionaries to add new values to the chatbot, but Gradio also provides the `ChatMessage` dataclass to help you with IDE autocompletion. The schema of `ChatMessage` is as follows:\n\n ```py\nMessageContent = Union[str, FileDataDict, FileData, Component]\n\n@dataclass\nclass ChatMessage:\n content: MessageContent | list[MessageContent]\n role: Literal[\"user\", \"assistant\"]\n metadata: MetadataDict = None\n options: list[OptionDict] = None\n\nclass MetadataDict(TypedDict):\n title: NotRequired[str]\n id: NotRequired[int | str]\n parent_id: NotRequired[int | str]\n log: NotRequired[str]\n duration: NotRequired[float]\n status: NotRequired[Literal[\"pending\", \"done\"]]\n\nclass OptionDict(TypedDict):\n label: NotRequired[str]\n value: str\n ```\n\n\nFor our purposes, the most important key is the `metadata` key, which accepts a dictionary. If this dictionary includes a `title` for the message, it will be displayed in a collapsible accordion representing a thought. It's that simple! 
Take a look at this example:\n\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n chatbot = gr.Chatbot(\n value=[\n gr.ChatMessage(\n role=\"user\", \n content=\"What is the weather in San Francisco?\"\n ),\n gr.ChatMessage(\n role=\"assistant\", \n content=\"I need to use the weather API tool?\",\n metadata={\"title\": \"\ud83e\udde0 Thinking\"}\n )\n ]\n )\n\ndemo.launch()\n```\n\n\n\nIn addition to `title`, the dictionary provided to `metadata` can take several optional keys:\n\n* `log`: an optional string value to be displayed in a subdued font next to the thought title.\n* `duration`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in parentheses, in a subdued font, next to the thought title.\n* `status`: if set to `", "heading1": "The `ChatMessage` dataclass", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "tion`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in parentheses, in a subdued font, next to the thought title.\n* `status`: if set to `\"pending\"`, a spinner appears next to the thought title and the accordion is initialized open. If `status` is `\"done\"`, the thought accordion is initialized closed. 
If `status` is not provided, the thought accordion is initialized open and no spinner is displayed.\n* `id` and `parent_id`: if these are provided, they can be used to nest thoughts inside other thoughts.\n\nBelow, we show several complete examples of using `gr.Chatbot` and `gr.ChatInterface` to display tool use or thinking UIs.\n\n", "heading1": "The `ChatMessage` dataclass", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "A real example using transformers.agents\n\nWe'll create a Gradio application for a simple agent that has access to a text-to-image tool.\n\nTip: Make sure you read the [smolagents documentation](https://huggingface.co/docs/smolagents/index) first.\n\nWe'll start by importing the necessary classes from transformers and gradio. \n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom transformers import Tool, ReactCodeAgent  # type: ignore\nfrom transformers.agents import stream_to_gradio, HfApiEngine  # type: ignore\n\n# Import tool from Hub\nimage_generation_tool = Tool.from_space(\n space_id=\"black-forest-labs/FLUX.1-schnell\",\n name=\"image_generator\",\n description=\"Generates an image following your prompt. 
Returns a PIL Image.\",\n api_name=\"/infer\",\n)\n\nllm_engine = HfApiEngine(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\nInitialize the agent with both tools and engine\nagent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)\n```\n\nThen we'll build the UI:\n\n```python\ndef interact_with_agent(prompt, history):\n messages = []\n yield messages\n for msg in stream_to_gradio(agent, prompt):\n messages.append(asdict(msg))\n yield messages\n yield messages\n\n\ndemo = gr.ChatInterface(\n interact_with_agent,\n chatbot= gr.Chatbot(\n label=\"Agent\",\n avatar_images=(\n None,\n \"https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png\",\n ),\n ),\n examples=[\n [\"Generate an image of an astronaut riding an alligator\"],\n [\"I am writing a children's book for my daughter. Can you help me with some illustrations?\"],\n ],\n)\n```\n\nYou can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).\n\n\n![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n\nA real example using langchain agents\n\nWe'll create a UI for langchain agent that has access to a search eng", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "om/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n\nA real example using langchain agents\n\nWe'll create a UI for langchain agent that has access to a search engine.\n\nWe'll begin with imports and setting up the langchain agent. 
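In both agent demos, each intermediate tool call is rendered as an assistant message whose `metadata["title"]` turns it into a collapsible thought in the Chatbot. That translation step can be sketched on its own (a hedged helper; the function name and the step-record shape are illustrative, not part of any library):

```python
def tool_step_messages(steps):
    """Turn agent tool-call records into Chatbot-style message dicts
    whose metadata title renders as a collapsible 'thought'."""
    messages = []
    for step in steps:
        messages.append({
            "role": "assistant",
            "content": step["log"],
            "metadata": {"title": f"🛠️ Used tool {step['tool']}"},
        })
    return messages

msgs = tool_step_messages([{"tool": "image_generator", "log": "Calling image_generator..."}])
```

`stream_to_gradio` (transformers) and the langchain streaming loop below each produce messages of exactly this shape.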
Note that you'll need a .env file with the following environment variables set - \n\n```\nSERPAPI_API_KEY=\nHF_TOKEN=\nOPENAI_API_KEY=\n```\n\n```python\nfrom langchain import hub\nfrom langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools\nfrom langchain_openai import ChatOpenAI\nfrom gradio import ChatMessage\nimport gradio as gr\n\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nmodel = ChatOpenAI(temperature=0, streaming=True)\n\ntools = load_tools([\"serpapi\"])\n\n# Get the prompt to use - you can modify this!\nprompt = hub.pull(\"hwchase17/openai-tools-agent\")\nagent = create_openai_tools_agent(\n model.with_config({\"tags\": [\"agent_llm\"]}), tools, prompt\n)\nagent_executor = AgentExecutor(agent=agent, tools=tools).with_config(\n {\"run_name\": \"Agent\"}\n)\n```\n\nThen we'll create the Gradio UI:\n\n```python\nasync def interact_with_langchain_agent(prompt, messages):\n messages.append(ChatMessage(role=\"user\", content=prompt))\n yield messages\n async for chunk in agent_executor.astream(\n {\"input\": prompt}\n ):\n if \"steps\" in chunk:\n for step in chunk[\"steps\"]:\n messages.append(ChatMessage(role=\"assistant\", content=step.action.log,\n metadata={\"title\": f\"\ud83d\udee0\ufe0f Used tool {step.action.tool}\"}))\n yield messages\n if \"output\" in chunk:\n messages.append(ChatMessage(role=\"assistant\", content=chunk[\"output\"]))\n yield messages\n\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with a LangChain Agent \ud83e\udd9c\u26d3\ufe0f and see its thoughts \ud83d\udcad\")\n chatbot = gr.Chatbot(\n label=\"Agent\",\n avatar_images=(\n None,\n \"https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png\",\n ", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "\ud83e\udd9c\u26d3\ufe0f and see its thoughts \ud83d\udcad\")\n chatbot = gr.Chatbot(\n label=\"Agent\",\n 
avatar_images=(\n None,\n \"https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png\",\n ),\n )\n input = gr.Textbox(lines=1, label=\"Chat Message\")\n input.submit(interact_with_langchain_agent, [input, chatbot], [chatbot])\n\ndemo.launch()\n```\n\n![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)\n\nThat's it! See our finished langchain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).\n\n\n", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "The Gradio Chatbot can natively display intermediate thoughts of a _thinking_ LLM. This makes it perfect for creating UIs that show how an AI model \"thinks\" while generating responses. The guide below will show you how to build a chatbot that displays Gemini AI's thought process in real-time.\n\n\nA real example using Gemini 2.0 Flash Thinking API\n\nLet's create a complete chatbot that shows its thoughts and responses in real-time. We'll use Google's Gemini API to access the Gemini 2.0 Flash Thinking LLM and Gradio for the UI.\n\nWe'll begin with imports and setting up the Gemini client. 
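The heart of the streaming function we are about to write is a two-buffer state machine: chunks accumulate in a thought buffer until a chunk arrives carrying two parts (the end of the thought plus the first answer part), after which all text goes to the response buffer. Stripped of the Gradio and Gemini specifics (chunks modeled here as plain lists of strings, purely for illustration), the routing logic is:

```python
def route_chunks(chunks):
    """Split a stream of part-lists into (thought, answer) buffers.
    A two-part chunk marks the end of thinking, as in the Gemini API."""
    thought, answer, thinking_complete = "", "", False
    for parts in chunks:
        text = parts[0]
        if len(parts) == 2 and not thinking_complete:
            thought += text       # final thought fragment
            answer += parts[1]    # first answer fragment
            thinking_complete = True
        elif thinking_complete:
            answer += text
        else:
            thought += text
    return thought, answer
```

The real function below does the same routing, but rewrites the last `ChatMessage` in place and yields the message list after every chunk so the UI updates live.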
Note that you'll need to [acquire a Google Gemini API key](https://aistudio.google.com/apikey) first -\n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom typing import Iterator\nimport google.generativeai as genai\n\ngenai.configure(api_key=\"your-gemini-api-key\")\nmodel = genai.GenerativeModel(\"gemini-2.0-flash-thinking-exp-1219\")\n```\n\nFirst, let's set up our streaming function that handles the model's output:\n\n```python\ndef stream_gemini_response(user_message: str, messages: list) -> Iterator[list]:\n \"\"\"\n Streams both thoughts and responses from the Gemini model.\n \"\"\"\n # Initialize response from Gemini\n response = model.generate_content(user_message, stream=True)\n \n # Initialize buffers\n thought_buffer = \"\"\n response_buffer = \"\"\n thinking_complete = False\n \n # Add initial thinking message\n messages.append(\n ChatMessage(\n role=\"assistant\",\n content=\"\",\n metadata={\"title\": \"\u23f3Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n )\n )\n \n for chunk in response:\n parts = chunk.candidates[0].content.parts\n current_chunk = parts[0].text\n \n if len(parts) == 2 and not thinking_complete:\n # Complete thought and start response\n thought_buffer += current_chunk\n messages[-1] = ChatMessage(\n rol", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " if len(parts) == 2 and not thinking_complete:\n # Complete thought and start response\n thought_buffer += current_chunk\n messages[-1] = ChatMessage(\n role=\"assistant\",\n content=thought_buffer,\n metadata={\"title\": \"\u23f3Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n )\n \n # Add response message\n messages.append(\n ChatMessage(\n role=\"assistant\",\n content=parts[1].text\n )\n )\n thinking_complete = True\n \n elif thinking_complete:\n # Continue 
streaming response\n response_buffer += current_chunk\n messages[-1] = ChatMessage(\n role=\"assistant\",\n content=response_buffer\n )\n \n else:\n # Continue streaming thoughts\n thought_buffer += current_chunk\n messages[-1] = ChatMessage(\n role=\"assistant\",\n content=thought_buffer,\n metadata={\"title\": \"\u23f3Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n )\n \n yield messages\n```\n\nThen, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Gemini 2.0 Flash and See its Thoughts \ud83d\udcad\")\n \n chatbot = gr.Chatbot(\n label=\"Gemini2.0 'Thinking' Chatbot\",\n render_markdown=True,\n )\n \n input_box = gr.Textbox(\n lines=1,\n label=\"Chat Message\",\n placeholder=\"Type your message here and press Enter...\"\n )\n \n # Set up event handlers\n msg_store = gr.State(\"\")  # Store for preserving user message\n \n input_box.submit(\n lambda msg: (msg, msg, \"\"),  # Store message and clear input\n inputs=[input_box],\n outputs=[msg_store, input_box, input_box],\n queue=Fa", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " message\n \n input_box.submit(\n lambda msg: (msg, msg, \"\"),  # Store message and clear input\n inputs=[input_box],\n outputs=[msg_store, input_box, input_box],\n queue=False\n ).then(\n user_message,  # Add user message to chat\n inputs=[msg_store, chatbot],\n outputs=[input_box, chatbot],\n queue=False\n ).then(\n stream_gemini_response,  # Generate and stream response\n inputs=[msg_store, chatbot],\n outputs=chatbot\n )\n\ndemo.launch()\n```\n\nThis creates a chatbot that:\n\n- Displays the model's thoughts in a collapsible section\n- Streams the thoughts and final response in real-time\n- Maintains a clean chat history\n\n That's it! 
You now have a chatbot that not only responds to users but also shows its thinking process, creating a more transparent and engaging interaction. See our finished Gemini 2.0 Flash Thinking demo [here](https://huggingface.co/spaces/ysharma/Gemini2-Flash-Thinking).\n\n\n Building with Citations \n\nThe Gradio Chatbot can display citations from LLM responses, making it perfect for creating UIs that show source documentation and references. This guide will show you how to build a chatbot that displays Claude's citations in real-time.\n\nA real example using Anthropic's Citations API\nLet's create a complete chatbot that shows both responses and their supporting citations. We'll use Anthropic's Claude API with citations enabled and Gradio for the UI.\n\nWe'll begin with imports and setting up the Anthropic client. Note that you'll need an `ANTHROPIC_API_KEY` environment variable set:\n\n```python\nimport gradio as gr\nimport anthropic\nimport base64\nfrom typing import List, Dict, Any\n\nclient = anthropic.Anthropic()\n```\n\nFirst, let's set up our message formatting functions that handle document preparation:\n\n```python\ndef encode_pdf_to_base64(file_obj) -> str:\n \"\"\"Convert uploaded PDF file to base64 string.\"\"\"\n if file_obj is None:\n return None\n with open(file_obj.na", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "document preparation:\n\n```python\ndef encode_pdf_to_base64(file_obj) -> str:\n \"\"\"Convert uploaded PDF file to base64 string.\"\"\"\n if file_obj is None:\n return None\n with open(file_obj.name, 'rb') as f:\n return base64.b64encode(f.read()).decode('utf-8')\n\ndef format_message_history(\n history: list, \n enable_citations: bool,\n doc_type: str,\n text_input: str,\n pdf_file: str\n) -> List[Dict]:\n \"\"\"Convert Gradio chat history to Anthropic message format.\"\"\"\n formatted_messages 
= []\n \n # Add previous messages\n for msg in history[:-1]:\n if msg[\"role\"] == \"user\":\n formatted_messages.append({\"role\": \"user\", \"content\": msg[\"content\"]})\n \n # Prepare the latest message with document\n latest_message = {\"role\": \"user\", \"content\": []}\n \n if enable_citations:\n if doc_type == \"plain_text\":\n latest_message[\"content\"].append({\n \"type\": \"document\",\n \"source\": {\n \"type\": \"text\",\n \"media_type\": \"text/plain\",\n \"data\": text_input.strip()\n },\n \"title\": \"Text Document\",\n \"citations\": {\"enabled\": True}\n })\n elif doc_type == \"pdf\" and pdf_file:\n pdf_data = encode_pdf_to_base64(pdf_file)\n if pdf_data:\n latest_message[\"content\"].append({\n \"type\": \"document\",\n \"source\": {\n \"type\": \"base64\",\n \"media_type\": \"application/pdf\",\n \"data\": pdf_data\n },\n \"title\": pdf_file.name,\n \"citations\": {\"enabled\": True}\n })\n \n # Add the user's question\n latest_message[\"content\"].append({\"type\": \"text\", \"text\": history[-1][\"content\"]})\n \n formatted_messages.append(latest_message)\n return formatted_messages\n```\n\nThen, let's create our bot resp", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "latest_message[\"content\"].append({\"type\": \"text\", \"text\": history[-1][\"content\"]})\n \n formatted_messages.append(latest_message)\n return formatted_messages\n```\n\nThen, let's create our bot response handler that processes citations:\n\n```python\ndef bot_response(\n history: list,\n enable_citations: bool,\n doc_type: str,\n text_input: str,\n pdf_file: str\n) -> List[Dict[str, Any]]:\n try:\n messages = format_message_history(history, enable_citations, doc_type, text_input, pdf_file)\n response = client.messages.create(model=\"claude-3-5-sonnet-20241022\", max_tokens=1024, messages=messages)\n \n # Initialize main response 
and citations\n main_response = \"\"\n citations = []\n \n # Process each content block\n for block in response.content:\n if block.type == \"text\":\n main_response += block.text\n if enable_citations and hasattr(block, 'citations') and block.citations:\n for citation in block.citations:\n if citation.cited_text not in citations:\n citations.append(citation.cited_text)\n \n # Add main response\n history.append({\"role\": \"assistant\", \"content\": main_response})\n \n # Add citations in a collapsible section\n if enable_citations and citations:\n history.append({\n \"role\": \"assistant\",\n \"content\": \"\\n\".join([f\"\u2022 {cite}\" for cite in citations]),\n \"metadata\": {\"title\": \"\ud83d\udcda Citations\"}\n })\n \n return history\n \n except Exception as e:\n history.append({\n \"role\": \"assistant\",\n \"content\": \"I apologize, but I encountered an error while processing your request.\"\n })\n return history\n```\n\nFinally, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Citations\")\n \n with gr.Row(sc", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " your request.\"\n })\n return history\n```\n\nFinally, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Citations\")\n \n with gr.Row(scale=1):\n with gr.Column(scale=4):\n chatbot = gr.Chatbot(bubble_full_width=False, show_label=False, scale=1)\n msg = gr.Textbox(placeholder=\"Enter your message here...\", show_label=False, container=False)\n \n with gr.Column(scale=1):\n enable_citations = gr.Checkbox(label=\"Enable Citations\", value=True, info=\"Toggle citation functionality\")\n doc_type_radio = gr.Radio(choices=[\"plain_text\", \"pdf\"], value=\"plain_text\", label=\"Document Type\", info=\"Choose the type of document to use\")\n text_input = 
gr.Textbox(label=\"Document Content\", lines=10, info=\"Enter the text you want to reference\")\n pdf_input = gr.File(label=\"Upload PDF\", file_types=[\".pdf\"], file_count=\"single\", visible=False)\n \n Handle message submission\n msg.submit(\n user_message,\n [msg, chatbot, enable_citations, doc_type_radio, text_input, pdf_input],\n [msg, chatbot]\n ).then(\n bot_response,\n [chatbot, enable_citations, doc_type_radio, text_input, pdf_input],\n chatbot\n )\n\ndemo.launch()\n```\n\nThis creates a chatbot that:\n- Supports both plain text and PDF documents for Claude to cite from \n- Displays Citations in collapsible sections using our `metadata` feature\n- Shows source quotes directly from the given documents\n\nThe citations feature works particularly well with the Gradio Chatbot's `metadata` support, allowing us to create collapsible sections that keep the chat interface clean while still providing easy access to source documentation.\n\nThat's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/a", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "tbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. 
See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/anthropic-citations-with-gradio-metadata-key).\n\n", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "**Important Note**: if you are getting started, we recommend using the `gr.ChatInterface` to create chatbots -- it's a high-level abstraction that makes it possible to create beautiful chatbot applications fast, often with a single line of code. [Read more about it here](/guides/creating-a-chatbot-fast).\n\nThis tutorial will show how to make chatbot UIs from scratch with Gradio's low-level Blocks API. This will give you full control over your Chatbot UI. You'll start by creating a simple chatbot to display text, a second one to stream text responses, and finally a chatbot that can handle media files as well. The chatbot interface that we create will look something like this:\n\n$demo_chatbot_streaming\n\n**Prerequisite**: We'll be using the `gradio.Blocks` class to build our Chatbot demo.\nYou can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/creating-a-custom-chatbot-with-blocks", "source_page_title": "Chatbots - Creating A Custom Chatbot With Blocks Guide"}, {"text": "Let's start by recreating the simple demo above. As you may have noticed, our bot simply randomly responds \"How are you?\", \"Today is a great day\", or \"I'm very hungry\" to any input. 
Here's the code to create this with Gradio:\n\n$code_chatbot_simple\n\nThere are three Gradio components here:\n\n- A `Chatbot`, whose value stores the entire history of the conversation, as a list of response pairs between the user and bot.\n- A `Textbox` where the user can type their message, and then hit enter/submit to trigger the chatbot response.\n- A `ClearButton` to clear the Textbox and the entire Chatbot history.\n\nWe have a single function, `respond()`, which takes in the entire history of the chatbot, appends a random message, waits 1 second, and then returns the updated chat history. The `respond()` function also clears the textbox when it returns.\n\nOf course, in practice, you would replace `respond()` with your own more complex function, which might call a pretrained model or an API, to generate a response.\n\n$demo_chatbot_simple\n\nTip: For better type hinting and auto-completion in your IDE, you can use the `gr.ChatMessage` dataclass:\n\n```python\nfrom gradio import ChatMessage\n\ndef chat_function(message, history):\n history.append(ChatMessage(role=\"user\", content=message))\n history.append(ChatMessage(role=\"assistant\", content=\"Hello, how can I help you?\"))\n return history\n```\n\n", "heading1": "A Simple Chatbot Demo", "source_page_url": "https://gradio.app/guides/creating-a-custom-chatbot-with-blocks", "source_page_title": "Chatbots - Creating A Custom Chatbot With Blocks Guide"}, {"text": "There are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that:\n\n$code_chatbot_streaming\n\nYou'll notice that when a user submits their message, we now _chain_ two events with `.then()`:\n\n1. 
The first method `user()` updates the chatbot with the user message and clears the input field. Because we want this to happen instantly, we set `queue=False`, which would skip any queue had it been enabled. The chatbot's history is appended with `{\"role\": \"user\", \"content\": user_message}`.\n\n2. The second method, `bot()`, updates the chatbot history with the bot's response. Finally, we construct the message character by character and `yield` the intermediate outputs as they are being constructed. Gradio automatically turns any function with the `yield` keyword [into a streaming output interface](/guides/key-features/iterative-outputs).\n\n\nOf course, in practice, you would replace `bot()` with your own more complex function, which might call a pretrained model or an API, to generate a response.\n\n\n", "heading1": "Add Streaming to your Chatbot", "source_page_url": "https://gradio.app/guides/creating-a-custom-chatbot-with-blocks", "source_page_title": "Chatbots - Creating A Custom Chatbot With Blocks Guide"}, {"text": "The `gr.Chatbot` component supports a subset of markdown including bold, italics, and code. For example, we could write a function that responds to a user's message, with a bold **That's cool!**, like this:\n\n```py\ndef bot(history):\n    response = {\"role\": \"assistant\", \"content\": \"**That's cool!**\"}\n    history.append(response)\n    return history\n```\n\nIn addition, it can handle media files, such as images, audio, and video. You can use the `MultimodalTextbox` component to easily upload all types of media files to your chatbot. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. To pass in a media file, we must pass in the file as a dictionary with a `path` key pointing to a local file and an `alt_text` key. 
The `alt_text` is optional, so you can also just pass in a dictionary with only the `path` key, i.e. `{\"path\": \"filepath\"}`, like this:\n\n```python\ndef add_message(history, message):\n    for x in message[\"files\"]:\n        history.append({\"role\": \"user\", \"content\": {\"path\": x}})\n    if message[\"text\"] is not None:\n        history.append({\"role\": \"user\", \"content\": message[\"text\"]})\n    return history, gr.MultimodalTextbox(value=None, interactive=False, file_types=[\"image\"], sources=[\"upload\", \"microphone\"])\n```\n\nPutting this together, we can create a _multimodal_ chatbot with a multimodal textbox for a user to submit text and media files. The rest of the code looks pretty much the same as before:\n\n$code_chatbot_multimodal\n$demo_chatbot_multimodal\n\nAnd you're done! That's all the code you need to build an interface for your chatbot model. Finally, we'll end our Guide with some links to Chatbots that are running on Spaces so that you can get an idea of what else is possible:\n\n- [project-baize/Baize-7B](https://huggingface.co/spaces/project-baize/Baize-7B): A stylized chatbot that allows you to stop generation as well as regenerate responses.\n- [MAGAer13/mPLUG-Owl](https://huggingface.co/spaces/MAGAer13/mPLUG-Ow
a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share that with your users, or try it yourself using an intuitive UI.\n\nThis tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a _few lines of Python_. It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.\n\n**Prerequisites**: please make sure you are using the latest version of Gradio:\n\n```bash\n$ pip install --upgrade gradio\n```\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run `pip install openai`. Then, with your own URL, model, and optional token:\n\n```python\nimport gradio as gr\n\ngr.load_chat(\"http://localhost:11434/v1/\", model=\"llama3.2\", token=\"***\").launch()\n```\n\nRead about `gr.load_chat` in [the docs](https://www.gradio.app/docs/gradio/load_chat). If you have your own model, keep reading to see how to create an application around any chat model in Python!\n\n", "heading1": "Note for OpenAI-API compatible endpoints", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To create a chat application with `gr.ChatInterface()`, the first thing you should do is define your **chat function**. 
In the simplest case, your chat function should accept two arguments: `message` and `history` (the arguments can be named anything, but must be in this order).\n\n- `message`: a `str` representing the user's most recent message.\n- `history`: a list of openai-style dictionaries with `role` and `content` keys, representing the previous conversation history. May also include additional keys representing message metadata.\n\nThe `history` would look like this:\n\n```python\n[\n    {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"What is the capital of France?\"}]},\n    {\"role\": \"assistant\", \"content\": [{\"type\": \"text\", \"text\": \"Paris\"}]}\n]\n```\n\nwhile the next `message` would be:\n\n```py\n\"And what is its largest city?\"\n```\n\nYour chat function simply needs to return: \n\n* a `str` value, which is the chatbot's response based on the chat `history` and most recent `message`, for example, in this case:\n\n```\nParis is also the largest city.\n```\n\nLet's take a look at a few example chat functions:\n\n**Example: a chatbot that randomly responds with yes or no**\n\nLet's write a chat function that responds `Yes` or `No` randomly.\n\nHere's our chat function:\n\n```python\nimport random\n\ndef random_response(message, history):\n    return random.choice([\"Yes\", \"No\"])\n```\n\nNow, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:\n\n```python\nimport gradio as gr\n\ngr.ChatInterface(\n    fn=random_response, \n).launch()\n```\n\nThat's it! Here's our running demo, try it out:\n\n$demo_chatinterface_random_response\n\n**Example: a chatbot that alternates between agreeing and disagreeing**\n\nOf course, the previous example was very simplistic; it didn't take user input or the previous history into account! 
Here's another simple example showing how to incorporate a user's input as well as the history.\n\n```python\nimport gradio as gr\n\ndef alternatingl", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "t take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.\n\n```python\nimport gradio as gr\n\ndef alternatingly_agree(message, history):\n    if len([h for h in history if h['role'] == \"assistant\"]) % 2 == 0:\n        return f\"Yes, I do think that: {message}\"\n    else:\n        return \"I don't think so\"\n\ngr.ChatInterface(\n    fn=alternatingly_agree, \n).launch()\n```\n\nWe'll look at more realistic examples of chat functions in our next Guide, which shows [examples of using `gr.ChatInterface` with popular LLMs](../guides/chatinterface-examples). \n\n", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. 
It's that simple!\n\n```python\nimport time\nimport gradio as gr\n\ndef slow_echo(message, history):\n    for i in range(len(message)):\n        time.sleep(0.3)\n        yield \"You typed: \" + message[: i+1]\n\ngr.ChatInterface(\n    fn=slow_echo, \n).launch()\n```\n\nWhile the response is streaming, the \"Submit\" button turns into a \"Stop\" button that can be used to stop the generator function.\n\nTip: Even though you are yielding the latest message at each iteration, Gradio only sends the \"diff\" of each message from the server to the frontend, which reduces latency and data consumption over your network.\n\n", "heading1": "Streaming chatbots", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you're familiar with Gradio's `gr.Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:\n\n- add a title and description above your chatbot using `title` and `description` arguments.\n- add a theme or custom css using `theme` and `css` arguments respectively in the `launch()` method.\n- add `examples` and even enable `cache_examples`, which make it easier for users to try out your Chatbot.\n- customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).\n\n**Adding examples**\n\nYou can add preset examples to your `gr.ChatInterface` with the `examples` parameter, which takes a list of string examples. Any examples will appear as \"buttons\" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: `{\"text\": \"What's in this image?\", \"files\": [\"cheetah.jpg\"]}`. 
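For instance, a valid `examples` list can mix plain strings with this dictionary format. Here is a minimal sketch (the file name `cheetah.jpg` is a hypothetical local path):

```python
# A sketch of a mixed `examples` list for gr.ChatInterface.
# "cheetah.jpg" is a hypothetical local file; swap in a real path.
examples = [
    "Hello, how are you?",  # plain-text example
    {"text": "What's in this image?", "files": ["cheetah.jpg"]},  # text + file example
]

# Each entry is either a string, or a dict with "text" and "files" keys:
for ex in examples:
    assert isinstance(ex, str) or {"text", "files"} <= set(ex)
```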
Each file will be a separate message that is added to your Chatbot history.\n\nYou can change the displayed text for each example by using the `example_labels` argument. You can add icons to each example as well using the `example_icons` argument. Both of these arguments take a list of strings, which should be the same length as the `examples` list.\n\nIf you'd like to cache the examples so that they are pre-computed and the results appear instantly, set `cache_examples=True`.\n\n**Customizing the chatbot or textbox component**\n\nIf you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox components. Here's an example of how to apply the parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n    if message.endswith(\"?\"):\n        return \"Yes\"\n    else:\n        ", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "le of how to apply the parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n    if message.endswith(\"?\"):\n        return \"Yes\"\n    else:\n        return \"Ask me anything!\"\n\ngr.ChatInterface(\n    yes_man,\n    chatbot=gr.Chatbot(height=300),\n    textbox=gr.Textbox(placeholder=\"Ask me a yes or no question\", container=False, scale=7),\n    title=\"Yes Man\",\n    description=\"Ask Yes Man any question\",\n    examples=[\"Hello\", \"Am I cool?\", \"Are tomatoes vegetables?\"],\n    cache_examples=True,\n).launch(theme=\"ocean\")\n```\n\nHere's another example that adds a \"placeholder\" for your chat interface, which appears before the user has started chatting. The `placeholder` argument of `gr.Chatbot` accepts Markdown or HTML:\n\n```python\ngr.ChatInterface(\n    yes_man,\n    chatbot=gr.Chatbot(placeholder=\"Your Personal Yes-Man
Ask Me Anything\"),\n...\n```\n\nThe placeholder appears vertically and horizontally centered in the chatbot.\n\n", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot \"multimodal\" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.\n\nWhen `multimodal=True`, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as `message` above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this: \n\n```py\n{\n    \"text\": \"user input\", \n    \"files\": [\n        \"uploaded_file_1_path.ext\",\n        \"uploaded_file_2_path.ext\", \n        ...\n    ]\n}\n```\n\nThe second parameter of your chat function, `history`, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the `content` list will include dictionaries with a \"type\" key whose value is \"file\", and each file will be represented as a nested dictionary. All the files will be grouped in a single message in the history. So after uploading two files and asking a question, your history might look like this:\n\n```python\n[\n    {\"role\": \"user\", \"content\": [{\"type\": \"file\", \"file\": {\"path\": \"cat1.png\"}},\n                                 {\"type\": \"file\", \"file\": {\"path\": \"cat2.png\"}},\n                                 {\"type\": \"text\", \"text\": \"What's the difference between these two images?\"}]}\n]\n```\n\nThe return type of your chat function does *not change* when setting `multimodal=True` (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. 
returning files [below](returning-complex-responses).\n\nIf you are customizing a multimodal chat interface, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to set up and customize a multimodal chat interface:\n\n```python\nimport gradio as gr\n\ndef count_images(message, history):\n    num_images = len(message[\"files\"])\n    total_images = 0\n    for message in history:\n        for content in message[\"content\"]:\n            if content[\"type\"] == \"file\":\n                total_images += 1\n    return f\"You just uploaded {num_images} images, total uploaded: {total_images+num_images}\"\n\ndemo = gr.ChatInterface(\n    fn=count_images, \n    examples=[\n        {\"text\": \"No files\", \"files\": []}\n    ], \n    multimodal=True,\n    textbox=gr.MultimodalTextbox(file_count=\"multiple\", file_types=[\"image\"], sources=[\"upload\", \"microphone\"])\n)\n\ndemo.launch()\n```\n\n", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. 
The `gr.ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.\n\nThe `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `\"textbox\"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot within a `gr.Accordion()`. \n\nHere's a complete example:\n\n$code_chatinterface_system_prompt\n\nIf the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.\n\n```python\nimport gradio as gr\nimport time\n\ndef echo(message, history, system_prompt, tokens):\n    response = f\"System prompt: {system_prompt}\\n Message: {message}.\"\n    for i in range(min(len(response), int(tokens))):\n        time.sleep(0.05)\n        yield response[: i+1]\n\nwith gr.Blocks() as demo:\n    system_prompt = gr.Textbox(\"You are helpful AI.\", label=\"System Prompt\")\n    slider = gr.Slider(10, 100, render=False)\n\n    gr.ChatInterface(\n        echo, additional_inputs=[system_prompt, slider],\n    )\n\ndemo.launch()\n```\n\n**Examples with additional inputs**\n\nYou can also add example values for your additional inputs. Pass in a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. 
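For concreteness, here is a sketch of what such an `examples` list could look like for a chat function with two additional inputs (say, a system-prompt textbox and a token slider); the example values are made up:

```python
# Each row: [chat message, *values for the additional inputs, in order].
additional_inputs_count = 2  # e.g. a system prompt textbox and a token slider

examples = [
    ["Hello!", "You are helpful AI.", 50],
    ["Tell me a story.", "You are a terse assistant.", 20],
]

# Every row must be 1 + len(additional_inputs) long:
for row in examples:
    assert len(row) == 1 + additional_inputs_count
```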
The first element in the inner list should be the example v", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "s to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. When additional inputs are provided, examples are rendered in a table underneath the chat interface.\n\nIf you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).\n\n", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In the same way that you can accept additional inputs into your chat function, you can also return additional outputs. Simply pass in a list of components to the `additional_outputs` parameter in `gr.ChatInterface` and return additional values for each component from your chat function. Here's an example that extracts code and outputs it into a separate `gr.Code` component:\n\n$code_chatinterface_artifacts\n\n**Note:** unlike the case of additional inputs, the components passed in `additional_outputs` must be already defined in your `gr.Blocks` context -- they are not rendered automatically. 
If you need to render them after your `gr.ChatInterface`, you can set `render=False` when they are first defined and then `.render()` them in the appropriate section of your `gr.Blocks()` as we do in the example above.\n\n", "heading1": "Additional Outputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "We mentioned earlier that in the simplest case, your chat function should return a `str` response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:\n\n\n**Returning files or Gradio components**\n\nCurrently, the following Gradio components can be displayed inside the chat interface:\n* `gr.Image`\n* `gr.Plot`\n* `gr.Audio`\n* `gr.HTML`\n* `gr.Video`\n* `gr.Gallery`\n* `gr.File`\n\nSimply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example that returns an audio file:\n\n```py\nimport gradio as gr\n\ndef music(message, history):\n    if message.strip():\n        return gr.Audio(\"https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav\")\n    else:\n        return \"Please provide the name of an artist\"\n\ngr.ChatInterface(\n    music,\n    textbox=gr.Textbox(placeholder=\"Which artist's music do you want to listen to?\", scale=7),\n).launch()\n```\n\nSimilarly, you could return image files with `gr.Image`, video files with `gr.Video`, or arbitrary files with the `gr.File` component.\n\n**Returning Multiple Messages**\n\nYou can return multiple assistant messages from your chat function simply by returning a `list` of messages, each of which is a valid chat type. 
This lets you, for example, send a message along with files, as in the following example:\n\n$code_chatinterface_echo_multimodal\n\n\n**Displaying intermediate thoughts or tool usage**\n\nThe `gr.ChatInterface` class supports displaying intermediate thoughts or tool usage directly in the chatbot.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/nested-thought.png)\n\nTo do this, you will need to return a `gr.ChatMessage` object from your chat function. Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n\n```py\nMessageContent = Union[str, FileDataDict, FileData, Component]\n\n@dataclass\nclass ChatMessage:\n    content: Me", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ma of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n\n```py\nMessageContent = Union[str, FileDataDict, FileData, Component]\n\n@dataclass\nclass ChatMessage:\n    content: MessageContent | list[MessageContent]\n    metadata: MetadataDict = None\n    options: list[OptionDict] = None\n\nclass MetadataDict(TypedDict):\n    title: NotRequired[str]\n    id: NotRequired[int | str]\n    parent_id: NotRequired[int | str]\n    log: NotRequired[str]\n    duration: NotRequired[float]\n    status: NotRequired[Literal[\"pending\", \"done\"]]\n\nclass OptionDict(TypedDict):\n    label: NotRequired[str]\n    value: str\n```\n\nAs you can see, the `gr.ChatMessage` dataclass is similar to the openai-style message format, e.g. it has a \"content\" key that refers to the chat message content. But it also includes a \"metadata\" key whose value is a dictionary. If this dictionary includes a \"title\" key, the resulting message is displayed as an intermediate thought with the title being displayed on top of the thought. 
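In plain-dictionary form, such an intermediate-thought message might look like the following sketch (the tool name and content below are invented for illustration):

```python
# A sketch of an assistant message rendered as a collapsible "thought".
# The tool name and content are made-up illustrative values.
thought = {
    "role": "assistant",
    "content": "Searching the web for the latest Gradio release...",
    "metadata": {"title": "Using web search tool", "status": "pending"},
}

# The "title" key inside "metadata" is what marks this message as a thought:
assert "title" in thought["metadata"]
```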
Here's an example showing the usage:\n\n$code_chatinterface_thoughts\n\nYou can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include \"id\" and \"parent_id\" keys in the \"metadata\" dictionary. Read our [dedicated guide on displaying intermediate thoughts and tool usage](/guides/agents-and-tool-usage) for more realistic examples.\n\n**Providing preset responses**\n\nWhen returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, you will again return a `gr.ChatMessage` instance from your chat function. This time, make sure to set the `options` key specifying the preset responses.\n\nAs shown in the schema for `gr.ChatMessage` above, the value corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an opt
For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use `gr.ChatInterface.chatbot_value` as the input or output component in such events. In this example, we use a `gr.Radio` component to prefill the chatbot with certain conversations:\n\n$code_chatinterface_prefill\n\n", "heading1": "Modifying the Chatbot Value Directly", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Once you've built your Gradio chat interface and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API. The API route will be the name of the function you pass to the ChatInterface. So if `gr.ChatInterface(respond)`, then the API route is `/respond`. The endpoint just expects the user's message and will return the response, internally keeping track of the message history.\n\n![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)\n\nTo use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client). Or, you can deploy your Chat Interface to other platforms, such as a:\n\n* Slack bot [[tutorial]](../guides/creating-a-slack-bot-from-a-gradio-app)\n* Website widget [[tutorial]](../guides/creating-a-website-widget-from-a-gradio-chatbot)\n\n", "heading1": "Using Your Chatbot via API", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. 
When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on [Hugging Face Spaces](https://hf.space), each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories.\n\nTo enable this feature, simply set `gr.ChatInterface(save_history=True)` (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.\n\n", "heading1": "Chat History", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To gather feedback on your chat model, set `gr.ChatInterface(flagging_mode=\"manual\")` and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the `flagging_dir` parameter). \n\nYou can also change the feedback options via `flagging_options` parameter. The default options are \"Like\" and \"Dislike\", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled:\n\n$code_chatinterface_streaming_echo\n\nNote that in this example, we set several flagging options: \"Like\", \"Spam\", \"Inappropriate\", \"Other\". Because the case-sensitive string \"Like\" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. 
The three other flagging options will appear in a dropdown under the flag icon.\n\n", "heading1": "Collecting User Feedback", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Now that you've learned about the `gr.ChatInterface` class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:\n\n* [Our next Guide](../guides/chatinterface-examples) shows examples of how to use `gr.ChatInterface` with popular LLM libraries.\n* If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as [discussed in this Guide](../guides/creating-a-custom-chatbot-with-blocks).\n* Once you've deployed your Gradio Chat Interface, it's easy to use in other applications because of the built-in API. Here's a tutorial on [how to deploy a Gradio chat interface as a Discord bot](../guides/creating-a-discord-bot-from-a-gradio-app).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}]